00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 989 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3651 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.140 Using shallow fetch with depth 1 00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.140 > git --version # timeout=10 00:00:00.183 > git --version # 'git version 2.39.2' 00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.495 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.506 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.518 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.518 > git config core.sparsecheckout # timeout=10 00:00:05.529 > git read-tree -mu HEAD # timeout=10 00:00:05.544 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.567 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.567 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.676 [Pipeline] Start of Pipeline 00:00:05.689 [Pipeline] library 00:00:05.691 Loading library shm_lib@master 00:00:05.691 Library shm_lib@master is cached. Copying from home. 00:00:05.706 [Pipeline] node 00:00:05.719 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.720 [Pipeline] { 00:00:05.730 [Pipeline] catchError 00:00:05.732 [Pipeline] { 00:00:05.743 [Pipeline] wrap 00:00:05.752 [Pipeline] { 00:00:05.760 [Pipeline] stage 00:00:05.762 [Pipeline] { (Prologue) 00:00:05.783 [Pipeline] echo 00:00:05.784 Node: VM-host-SM38 00:00:05.792 [Pipeline] cleanWs 00:00:05.804 [WS-CLEANUP] Deleting project workspace... 00:00:05.804 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.811 [WS-CLEANUP] done 00:00:06.050 [Pipeline] setCustomBuildProperty 00:00:06.142 [Pipeline] httpRequest 00:00:06.492 [Pipeline] echo 00:00:06.494 Sorcerer 10.211.164.20 is alive 00:00:06.502 [Pipeline] retry 00:00:06.503 [Pipeline] { 00:00:06.515 [Pipeline] httpRequest 00:00:06.520 HttpMethod: GET 00:00:06.521 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.521 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.533 Response Code: HTTP/1.1 200 OK 00:00:06.534 Success: Status code 200 is in the accepted range: 200,404 00:00:06.534 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.688 [Pipeline] } 00:00:09.707 [Pipeline] // retry 00:00:09.715 [Pipeline] sh 00:00:10.004 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.029 [Pipeline] httpRequest 00:00:10.338 [Pipeline] echo 00:00:10.340 Sorcerer 10.211.164.20 is alive 00:00:10.350 [Pipeline] retry 00:00:10.352 [Pipeline] { 00:00:10.366 [Pipeline] httpRequest 00:00:10.372 HttpMethod: GET 00:00:10.372 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:10.373 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:10.392 Response Code: HTTP/1.1 200 OK 00:00:10.392 Success: Status code 200 is in the accepted range: 200,404 00:00:10.393 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:01:54.059 [Pipeline] } 00:01:54.079 [Pipeline] // retry 00:01:54.088 [Pipeline] sh 00:01:54.380 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:01:56.938 [Pipeline] sh 00:01:57.224 + git -C spdk log --oneline -n5 00:01:57.224 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:01:57.224 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:57.224 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:57.224 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:01:57.224 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:01:57.248 [Pipeline] withCredentials 00:01:57.261 > git --version # timeout=10 00:01:57.276 > git --version # 'git version 2.39.2' 00:01:57.297 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:57.300 [Pipeline] { 00:01:57.310 [Pipeline] retry 00:01:57.312 [Pipeline] { 00:01:57.330 [Pipeline] sh 00:01:57.620 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:57.635 [Pipeline] } 00:01:57.656 [Pipeline] // retry 00:01:57.663 [Pipeline] } 00:01:57.682 [Pipeline] // withCredentials 00:01:57.694 [Pipeline] httpRequest 00:01:58.082 [Pipeline] echo 00:01:58.084 Sorcerer 10.211.164.20 is alive 00:01:58.095 [Pipeline] retry 00:01:58.097 [Pipeline] { 00:01:58.114 [Pipeline] httpRequest 00:01:58.120 HttpMethod: GET 00:01:58.121 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:58.122 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:58.127 Response Code: HTTP/1.1 200 OK 00:01:58.128 Success: Status code 200 is in the accepted range: 200,404 00:01:58.129 Saving response body to 
/var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:31.281 [Pipeline] } 00:02:31.298 [Pipeline] // retry 00:02:31.306 [Pipeline] sh 00:02:31.593 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:32.990 [Pipeline] sh 00:02:33.273 + git -C dpdk log --oneline -n5 00:02:33.274 eeb0605f11 version: 23.11.0 00:02:33.274 238778122a doc: update release notes for 23.11 00:02:33.274 46aa6b3cfc doc: fix description of RSS features 00:02:33.274 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:33.274 7e421ae345 devtools: support skipping forbid rule check 00:02:33.292 [Pipeline] writeFile 00:02:33.307 [Pipeline] sh 00:02:33.592 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:33.606 [Pipeline] sh 00:02:33.889 + cat autorun-spdk.conf 00:02:33.889 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:33.889 SPDK_TEST_NVME=1 00:02:33.889 SPDK_TEST_FTL=1 00:02:33.889 SPDK_TEST_ISAL=1 00:02:33.889 SPDK_RUN_ASAN=1 00:02:33.889 SPDK_RUN_UBSAN=1 00:02:33.889 SPDK_TEST_XNVME=1 00:02:33.889 SPDK_TEST_NVME_FDP=1 00:02:33.889 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:33.889 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:33.889 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:33.898 RUN_NIGHTLY=1 00:02:33.901 [Pipeline] } 00:02:33.917 [Pipeline] // stage 00:02:33.936 [Pipeline] stage 00:02:33.938 [Pipeline] { (Run VM) 00:02:33.951 [Pipeline] sh 00:02:34.237 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:34.237 + echo 'Start stage prepare_nvme.sh' 00:02:34.237 Start stage prepare_nvme.sh 00:02:34.237 + [[ -n 7 ]] 00:02:34.237 + disk_prefix=ex7 00:02:34.237 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:34.237 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:34.237 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:34.237 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:34.237 ++ SPDK_TEST_NVME=1 00:02:34.237 ++ SPDK_TEST_FTL=1 00:02:34.237 ++ SPDK_TEST_ISAL=1 00:02:34.237 ++ SPDK_RUN_ASAN=1 00:02:34.237 ++ SPDK_RUN_UBSAN=1 00:02:34.237 ++ SPDK_TEST_XNVME=1 00:02:34.237 ++ SPDK_TEST_NVME_FDP=1 00:02:34.237 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:34.237 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:34.237 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:34.237 ++ RUN_NIGHTLY=1 00:02:34.237 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:34.237 + nvme_files=() 00:02:34.237 + declare -A nvme_files 00:02:34.237 + backend_dir=/var/lib/libvirt/images/backends 00:02:34.237 + nvme_files['nvme.img']=5G 00:02:34.237 + nvme_files['nvme-cmb.img']=5G 00:02:34.237 + nvme_files['nvme-multi0.img']=4G 00:02:34.237 + nvme_files['nvme-multi1.img']=4G 00:02:34.237 + nvme_files['nvme-multi2.img']=4G 00:02:34.237 + nvme_files['nvme-openstack.img']=8G 00:02:34.237 + nvme_files['nvme-zns.img']=5G 00:02:34.237 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:34.237 + (( SPDK_TEST_FTL == 1 )) 00:02:34.237 + nvme_files["nvme-ftl.img"]=6G 00:02:34.237 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:34.237 + nvme_files["nvme-fdp.img"]=1G 00:02:34.237 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:34.237 + for nvme in "${!nvme_files[@]}" 00:02:34.237 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:02:34.499 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:34.499 + for nvme in "${!nvme_files[@]}" 00:02:34.499 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:02:35.442 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:35.442 + for nvme in "${!nvme_files[@]}" 00:02:35.442 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:02:35.443 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:35.443 + for nvme in "${!nvme_files[@]}" 00:02:35.443 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:02:35.443 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:35.443 + for nvme in "${!nvme_files[@]}" 00:02:35.443 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:02:35.443 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:35.443 + for nvme in "${!nvme_files[@]}" 00:02:35.443 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:02:35.705 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:35.705 + for nvme in "${!nvme_files[@]}" 00:02:35.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:02:36.646 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:36.646 + for nvme in "${!nvme_files[@]}" 00:02:36.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:02:36.646 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:36.646 + for nvme in "${!nvme_files[@]}" 00:02:36.646 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:02:37.586 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:37.586 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:02:37.586 + echo 'End stage prepare_nvme.sh' 00:02:37.586 End stage prepare_nvme.sh 00:02:37.597 [Pipeline] sh 00:02:37.874 + DISTRO=fedora39 00:02:37.874 + CPUS=10 00:02:37.874 + RAM=12288 00:02:37.874 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:37.874 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:37.874 00:02:37.874 
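The prepare_nvme.sh xtrace above follows a simple pattern: declare a bash associative array mapping backing-image names to sizes, append the FTL and FDP images only when the matching SPDK_TEST_* flags from autorun-spdk.conf are set, then loop over the array and create each raw image. A minimal sketch of that pattern, reconstructed from the trace rather than taken from the script's actual source (the create_nvme_img.sh path, the ex7 disk prefix, and the sizes are copied from the log; everything else is illustrative):

    #!/usr/bin/env bash
    # Reconstructed from the xtrace above; not the verbatim prepare_nvme.sh source.
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    # Extra images exist only for the test modes enabled in autorun-spdk.conf.
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G

    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex7
    for nvme in "${!nvme_files[@]}"; do
        # create_nvme_img.sh allocates a raw backing image of the requested size.
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done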
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:37.874 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:37.874 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:37.874 HELP=0 00:02:37.874 DRY_RUN=0 00:02:37.874 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:02:37.874 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:37.874 NVME_AUTO_CREATE=0 00:02:37.874 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:02:37.874 NVME_CMB=,,,, 00:02:37.874 NVME_PMR=,,,, 00:02:37.874 NVME_ZNS=,,,, 00:02:37.874 NVME_MS=true,,,, 00:02:37.874 NVME_FDP=,,,on, 00:02:37.874 SPDK_VAGRANT_DISTRO=fedora39 00:02:37.874 SPDK_VAGRANT_VMCPU=10 00:02:37.874 SPDK_VAGRANT_VMRAM=12288 00:02:37.874 SPDK_VAGRANT_PROVIDER=libvirt 00:02:37.874 SPDK_VAGRANT_HTTP_PROXY= 00:02:37.874 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:37.874 SPDK_OPENSTACK_NETWORK=0 00:02:37.874 VAGRANT_PACKAGE_BOX=0 00:02:37.874 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:37.874 FORCE_DISTRO=true 00:02:37.874 VAGRANT_BOX_VERSION= 00:02:37.874 EXTRA_VAGRANTFILES= 00:02:37.874 NIC_MODEL=e1000 00:02:37.874 00:02:37.874 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:37.874 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:40.418 Bringing machine 'default' up with 'libvirt' provider... 00:02:40.679 ==> default: Creating image (snapshot of base box volume). 00:02:40.941 ==> default: Creating domain with the following settings... 
00:02:40.941 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732164537_f8b1788bdb2b62bb2817 00:02:40.941 ==> default: -- Domain type: kvm 00:02:40.941 ==> default: -- Cpus: 10 00:02:40.941 ==> default: -- Feature: acpi 00:02:40.941 ==> default: -- Feature: apic 00:02:40.941 ==> default: -- Feature: pae 00:02:40.941 ==> default: -- Memory: 12288M 00:02:40.941 ==> default: -- Memory Backing: hugepages: 00:02:40.941 ==> default: -- Management MAC: 00:02:40.941 ==> default: -- Loader: 00:02:40.941 ==> default: -- Nvram: 00:02:40.941 ==> default: -- Base box: spdk/fedora39 00:02:40.941 ==> default: -- Storage pool: default 00:02:40.941 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732164537_f8b1788bdb2b62bb2817.img (20G) 00:02:40.941 ==> default: -- Volume Cache: default 00:02:40.941 ==> default: -- Kernel: 00:02:40.941 ==> default: -- Initrd: 00:02:40.941 ==> default: -- Graphics Type: vnc 00:02:40.941 ==> default: -- Graphics Port: -1 00:02:40.941 ==> default: -- Graphics IP: 127.0.0.1 00:02:40.941 ==> default: -- Graphics Password: Not defined 00:02:40.941 ==> default: -- Video Type: cirrus 00:02:40.941 ==> default: -- Video VRAM: 9216 00:02:40.941 ==> default: -- Sound Type: 00:02:40.941 ==> default: -- Keymap: en-us 00:02:40.941 ==> default: -- TPM Path: 00:02:40.941 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:40.941 ==> default: -- Command line args: 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:40.941 ==> default: -> value=-drive, 00:02:40.941 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:40.941 ==> default: -> value=-drive, 00:02:40.941 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:40.941 ==> default: -> value=-drive, 00:02:40.941 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:40.941 ==> default: -> value=-drive, 00:02:40.941 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:40.941 ==> default: -> value=-drive, 00:02:40.941 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:40.941 ==> default: -> value=-drive, 00:02:40.941 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:40.941 ==> default: -> value=-device, 00:02:40.941 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:41.204 ==> default: Creating shared folders metadata... 00:02:41.204 ==> default: Starting domain. 00:02:43.792 ==> default: Waiting for domain to get an IP address... 00:02:58.733 ==> default: Waiting for SSH to become available... 00:03:00.121 ==> default: Configuring and enabling network interfaces... 00:03:04.327 default: SSH address: 192.168.121.29:22 00:03:04.327 default: SSH username: vagrant 00:03:04.327 default: SSH auth method: private key 00:03:06.872 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:15.022 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:03:20.307 ==> default: Mounting SSHFS shared folder... 00:03:22.849 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:22.849 ==> default: Checking Mount.. 00:03:23.791 ==> default: Folder Successfully Mounted! 00:03:23.791 00:03:23.791 SUCCESS! 00:03:23.791 00:03:23.791 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:23.791 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:23.791 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:23.791 00:03:23.801 [Pipeline] } 00:03:23.815 [Pipeline] // stage 00:03:23.824 [Pipeline] dir 00:03:23.824 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:03:23.826 [Pipeline] { 00:03:23.837 [Pipeline] catchError 00:03:23.839 [Pipeline] { 00:03:23.851 [Pipeline] sh 00:03:24.133 + vagrant ssh-config --host vagrant 00:03:24.133 + sed -ne '/^Host/,$p' 00:03:24.133 + tee ssh_conf 00:03:26.671 Host vagrant 00:03:26.671 HostName 192.168.121.29 00:03:26.671 User vagrant 00:03:26.671 Port 22 00:03:26.671 UserKnownHostsFile /dev/null 00:03:26.671 StrictHostKeyChecking no 00:03:26.671 PasswordAuthentication no 00:03:26.671 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:26.671 IdentitiesOnly yes 00:03:26.671 LogLevel FATAL 00:03:26.671 ForwardAgent yes 00:03:26.671 ForwardX11 yes 00:03:26.671 00:03:26.687 [Pipeline] withEnv 00:03:26.689 [Pipeline] { 00:03:26.702 [Pipeline] sh 00:03:27.077 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:03:27.077 source /etc/os-release 00:03:27.077 [[ -e /image.version ]] && img=$(< /image.version) 00:03:27.077 # Minimal, systemd-like check. 
00:03:27.077 if [[ -e /.dockerenv ]]; then 00:03:27.077 # Clear garbage from the node'\''s name: 00:03:27.077 # agt-er_autotest_547-896 -> autotest_547-896 00:03:27.077 # $HOSTNAME is the actual container id 00:03:27.077 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:27.077 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:27.077 # We can assume this is a mount from a host where container is running, 00:03:27.077 # so fetch its hostname to easily identify the target swarm worker. 00:03:27.077 container="$(< /etc/hostname) ($agent)" 00:03:27.077 else 00:03:27.077 # Fallback 00:03:27.077 container=$agent 00:03:27.077 fi 00:03:27.077 fi 00:03:27.077 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:27.077 ' 00:03:27.091 [Pipeline] } 00:03:27.107 [Pipeline] // withEnv 00:03:27.115 [Pipeline] setCustomBuildProperty 00:03:27.129 [Pipeline] stage 00:03:27.131 [Pipeline] { (Tests) 00:03:27.147 [Pipeline] sh 00:03:27.429 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:27.704 [Pipeline] sh 00:03:27.986 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:28.264 [Pipeline] timeout 00:03:28.264 Timeout set to expire in 50 min 00:03:28.266 [Pipeline] { 00:03:28.280 [Pipeline] sh 00:03:28.563 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:03:29.133 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:03:29.147 [Pipeline] sh 00:03:29.431 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:03:29.708 [Pipeline] sh 00:03:29.991 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:30.271 [Pipeline] sh 00:03:30.556 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:03:30.816 ++ readlink -f spdk_repo 00:03:30.816 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:30.816 + [[ -n /home/vagrant/spdk_repo ]] 00:03:30.816 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:30.816 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:30.816 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:30.816 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:30.816 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:30.816 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:30.816 + cd /home/vagrant/spdk_repo 00:03:30.816 + source /etc/os-release 00:03:30.816 ++ NAME='Fedora Linux' 00:03:30.816 ++ VERSION='39 (Cloud Edition)' 00:03:30.816 ++ ID=fedora 00:03:30.816 ++ VERSION_ID=39 00:03:30.816 ++ VERSION_CODENAME= 00:03:30.816 ++ PLATFORM_ID=platform:f39 00:03:30.816 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:30.816 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:30.816 ++ LOGO=fedora-logo-icon 00:03:30.816 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:30.816 ++ HOME_URL=https://fedoraproject.org/ 00:03:30.816 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:30.816 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:30.816 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:30.816 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:30.816 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:30.816 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:30.816 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:30.816 ++ SUPPORT_END=2024-11-12 00:03:30.816 ++ VARIANT='Cloud Edition' 00:03:30.816 ++ VARIANT_ID=cloud 00:03:30.816 + uname -a 00:03:30.816 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:30.816 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:31.078 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:31.339 Hugepages 00:03:31.339 node hugesize free / total 00:03:31.339 node0 1048576kB 0 / 0 00:03:31.339 node0 2048kB 0 / 0 00:03:31.339 00:03:31.339 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.600 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:31.600 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:31.600 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:31.600 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:31.600 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:31.600 + rm -f /tmp/spdk-ld-path 00:03:31.600 + source autorun-spdk.conf 00:03:31.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.600 ++ SPDK_TEST_NVME=1 00:03:31.601 ++ SPDK_TEST_FTL=1 00:03:31.601 ++ SPDK_TEST_ISAL=1 00:03:31.601 ++ SPDK_RUN_ASAN=1 00:03:31.601 ++ SPDK_RUN_UBSAN=1 00:03:31.601 ++ SPDK_TEST_XNVME=1 00:03:31.601 ++ SPDK_TEST_NVME_FDP=1 00:03:31.601 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:31.601 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:31.601 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:31.601 ++ RUN_NIGHTLY=1 00:03:31.601 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:31.601 + [[ -n '' ]] 00:03:31.601 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:31.601 + for M in /var/spdk/build-*-manifest.txt 00:03:31.601 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:31.601 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:31.601 + for M in /var/spdk/build-*-manifest.txt 00:03:31.601 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:31.601 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:31.601 + for M in /var/spdk/build-*-manifest.txt 00:03:31.601 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:31.601 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:31.601 ++ uname 00:03:31.601 + [[ Linux == 
\L\i\n\u\x ]] 00:03:31.601 + sudo dmesg -T 00:03:31.601 + sudo dmesg --clear 00:03:31.601 + dmesg_pid=5767 00:03:31.601 + [[ Fedora Linux == FreeBSD ]] 00:03:31.601 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:31.601 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:31.601 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:31.601 + [[ -x /usr/src/fio-static/fio ]] 00:03:31.601 + sudo dmesg -Tw 00:03:31.601 + export FIO_BIN=/usr/src/fio-static/fio 00:03:31.601 + FIO_BIN=/usr/src/fio-static/fio 00:03:31.601 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:31.601 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:31.601 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:31.601 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:31.601 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:31.601 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:31.601 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:31.601 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:31.601 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:31.863 04:49:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:31.863 04:49:48 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:31.863 04:49:48 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:03:31.863 04:49:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:31.863 04:49:48 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:31.863 04:49:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:31.863 04:49:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:31.863 04:49:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:31.863 04:49:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:31.863 04:49:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.863 04:49:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.863 04:49:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.863 04:49:48 -- 
paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.863 04:49:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.863 04:49:48 -- paths/export.sh@5 -- $ export PATH 00:03:31.863 04:49:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.863 04:49:48 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:31.863 04:49:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:31.863 04:49:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732164588.XXXXXX 00:03:31.863 04:49:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732164588.Jl8mN8 00:03:31.863 04:49:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:31.863 04:49:48 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:03:31.863 04:49:48 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:31.863 04:49:48 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:31.863 04:49:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:31.863 04:49:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:31.863 04:49:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:31.863 04:49:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:31.863 04:49:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.863 04:49:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:03:31.863 04:49:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:31.863 04:49:48 -- pm/common@17 -- $ local monitor 00:03:31.863 04:49:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.863 04:49:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.863 04:49:48 -- pm/common@25 -- $ 
sleep 1 00:03:31.863 04:49:48 -- pm/common@21 -- $ date +%s 00:03:31.863 04:49:48 -- pm/common@21 -- $ date +%s 00:03:31.863 04:49:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732164588 00:03:31.863 04:49:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732164588 00:03:31.863 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732164588_collect-cpu-load.pm.log 00:03:31.863 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732164588_collect-vmstat.pm.log 00:03:32.806 04:49:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:32.806 04:49:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:32.806 04:49:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:32.806 04:49:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:32.806 04:49:49 -- spdk/autobuild.sh@16 -- $ date -u 00:03:32.806 Thu Nov 21 04:49:49 AM UTC 2024 00:03:32.806 04:49:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:32.806 v25.01-pre-219-g557f022f6 00:03:32.806 04:49:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:32.806 04:49:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:32.806 04:49:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:32.806 04:49:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:32.806 04:49:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.806 ************************************ 00:03:32.806 START TEST asan 00:03:32.806 ************************************ 00:03:32.806 using asan 00:03:32.806 04:49:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:32.806 00:03:32.806 real 0m0.000s 00:03:32.806 user 0m0.000s 00:03:32.806 sys 0m0.000s 00:03:32.806 04:49:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:32.806 ************************************ 00:03:32.806 END TEST asan 00:03:32.806 ************************************ 00:03:32.806 04:49:49 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:33.068 04:49:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:33.068 04:49:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:33.068 04:49:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:33.068 04:49:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:33.068 04:49:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.068 ************************************ 00:03:33.068 START TEST ubsan 00:03:33.068 ************************************ 00:03:33.068 using ubsan 00:03:33.068 04:49:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:33.068 00:03:33.068 real 0m0.000s 00:03:33.068 user 0m0.000s 00:03:33.068 sys 0m0.000s 00:03:33.068 ************************************ 00:03:33.068 END TEST ubsan 00:03:33.068 ************************************ 00:03:33.068 04:49:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.068 04:49:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:33.068 04:49:49 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:33.068 04:49:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:33.068 04:49:49 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:33.068 04:49:49 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 
']' 00:03:33.068 04:49:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:33.068 04:49:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.068 ************************************ 00:03:33.068 START TEST build_native_dpdk 00:03:33.068 ************************************ 00:03:33.068 04:49:49 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:33.068 eeb0605f11 version: 23.11.0 00:03:33.068 238778122a doc: update release notes for 23.11 00:03:33.068 46aa6b3cfc doc: fix description of RSS features 00:03:33.068 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:33.068 7e421ae345 devtools: support skipping forbid rule check 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:03:33.068 04:49:49 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 
00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:33.068 04:49:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:03:33.069 patching file config/rte_config.h 00:03:33.069 Hunk #1 succeeded at 60 (offset 1 line). 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:03:33.069 patching file lib/pcapng/rte_pcapng.c 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:33.069 04:49:49 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:03:33.069 04:49:49 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:38.364 The Meson build system 00:03:38.364 Version: 1.5.0 00:03:38.364 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:38.364 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:38.364 Build type: native build 00:03:38.364 Program cat found: YES (/usr/bin/cat) 00:03:38.364 Project name: DPDK 00:03:38.364 Project version: 23.11.0 00:03:38.364 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:38.364 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:38.364 Host machine cpu family: x86_64 00:03:38.364 Host machine cpu: x86_64 00:03:38.364 Message: ## Building in Developer Mode ## 00:03:38.364 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:38.364 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:38.364 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:38.364 Program python3 found: YES (/usr/bin/python3) 00:03:38.364 Program cat found: YES (/usr/bin/cat) 00:03:38.364 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
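The three comparison traces above (lt 23.11.0 21.11.0, lt 23.11.0 24.07.0, ge 23.11.0 24.07.0) exercise the field-wise version comparison in scripts/common.sh: each version string is split on ".", "-" and ":" into an array, and numeric fields are compared left to right until they differ. A minimal sketch with the same observable results, reconstructed from the trace and not the verbatim source (the trace's decimal-validation helper is omitted here):

    #!/usr/bin/env bash
    # Reconstructed from the xtrace above; 0 = comparison holds, 1 = it does not.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            if (( a != b )); then
                case "$op" in
                    '<')  (( a < b ));  return ;;
                    '>=') (( a >= b )); return ;;
                esac
            fi
        done
        [[ $op == '>=' ]]   # equal versions satisfy >= but not <
    }

Run against the values in the trace, the sketch returns 1 for 23.11.0 < 21.11.0, 0 for 23.11.0 < 24.07.0, and 1 for 23.11.0 >= 24.07.0, matching the return codes logged above and hence the decisions to patch rte_config.h and rte_pcapng.c.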
00:03:38.364 Compiler for C supports arguments -march=native: YES 00:03:38.364 Checking for size of "void *" : 8 00:03:38.364 Checking for size of "void *" : 8 (cached) 00:03:38.364 Library m found: YES 00:03:38.364 Library numa found: YES 00:03:38.364 Has header "numaif.h" : YES 00:03:38.364 Library fdt found: NO 00:03:38.364 Library execinfo found: NO 00:03:38.364 Has header "execinfo.h" : YES 00:03:38.364 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:38.364 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:38.364 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:38.364 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:38.364 Run-time dependency openssl found: YES 3.1.1 00:03:38.364 Run-time dependency libpcap found: YES 1.10.4 00:03:38.364 Has header "pcap.h" with dependency libpcap: YES 00:03:38.364 Compiler for C supports arguments -Wcast-qual: YES 00:03:38.364 Compiler for C supports arguments -Wdeprecated: YES 00:03:38.364 Compiler for C supports arguments -Wformat: YES 00:03:38.364 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:38.364 Compiler for C supports arguments -Wformat-security: NO 00:03:38.364 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:38.364 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:38.364 Compiler for C supports arguments -Wnested-externs: YES 00:03:38.364 Compiler for C supports arguments -Wold-style-definition: YES 00:03:38.364 Compiler for C supports arguments -Wpointer-arith: YES 00:03:38.364 Compiler for C supports arguments -Wsign-compare: YES 00:03:38.364 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:38.364 Compiler for C supports arguments -Wundef: YES 00:03:38.364 Compiler for C supports arguments -Wwrite-strings: YES 00:03:38.364 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:38.364 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:38.364 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:38.365 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:38.365 Program objdump found: YES (/usr/bin/objdump) 00:03:38.365 Compiler for C supports arguments -mavx512f: YES 00:03:38.365 Checking if "AVX512 checking" compiles: YES 00:03:38.365 Fetching value of define "__SSE4_2__" : 1 00:03:38.365 Fetching value of define "__AES__" : 1 00:03:38.365 Fetching value of define "__AVX__" : 1 00:03:38.365 Fetching value of define "__AVX2__" : 1 00:03:38.365 Fetching value of define "__AVX512BW__" : 1 00:03:38.365 Fetching value of define "__AVX512CD__" : 1 00:03:38.365 Fetching value of define "__AVX512DQ__" : 1 00:03:38.365 Fetching value of define "__AVX512F__" : 1 00:03:38.365 Fetching value of define "__AVX512VL__" : 1 00:03:38.365 Fetching value of define "__PCLMUL__" : 1 00:03:38.365 Fetching value of define "__RDRND__" : 1 00:03:38.365 Fetching value of define "__RDSEED__" : 1 00:03:38.365 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:38.365 Fetching value of define "__znver1__" : (undefined) 00:03:38.365 Fetching value of define "__znver2__" : (undefined) 00:03:38.365 Fetching value of define "__znver3__" : (undefined) 00:03:38.365 Fetching value of define "__znver4__" : (undefined) 00:03:38.365 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:38.365 Message: lib/log: Defining dependency "log" 00:03:38.365 Message: lib/kvargs: Defining dependency "kvargs" 00:03:38.365 Message: lib/telemetry: Defining dependency "telemetry" 
00:03:38.365 Checking for function "getentropy" : NO 00:03:38.365 Message: lib/eal: Defining dependency "eal" 00:03:38.365 Message: lib/ring: Defining dependency "ring" 00:03:38.365 Message: lib/rcu: Defining dependency "rcu" 00:03:38.365 Message: lib/mempool: Defining dependency "mempool" 00:03:38.365 Message: lib/mbuf: Defining dependency "mbuf" 00:03:38.365 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:38.365 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:38.365 Compiler for C supports arguments -mpclmul: YES 00:03:38.365 Compiler for C supports arguments -maes: YES 00:03:38.365 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:38.365 Compiler for C supports arguments -mavx512bw: YES 00:03:38.365 Compiler for C supports arguments -mavx512dq: YES 00:03:38.365 Compiler for C supports arguments -mavx512vl: YES 00:03:38.365 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:38.365 Compiler for C supports arguments -mavx2: YES 00:03:38.365 Compiler for C supports arguments -mavx: YES 00:03:38.365 Message: lib/net: Defining dependency "net" 00:03:38.365 Message: lib/meter: Defining dependency "meter" 00:03:38.365 Message: lib/ethdev: Defining dependency "ethdev" 00:03:38.365 Message: lib/pci: Defining dependency "pci" 00:03:38.365 Message: lib/cmdline: Defining dependency "cmdline" 00:03:38.365 Message: lib/metrics: Defining dependency "metrics" 00:03:38.365 Message: lib/hash: Defining dependency "hash" 00:03:38.365 Message: lib/timer: Defining dependency "timer" 00:03:38.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:38.365 Message: lib/acl: Defining dependency "acl" 00:03:38.365 Message: lib/bbdev: Defining dependency "bbdev" 00:03:38.365 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:38.365 Run-time dependency libelf found: YES 0.191 00:03:38.365 Message: lib/bpf: Defining dependency "bpf" 00:03:38.365 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:38.365 Message: lib/compressdev: Defining dependency "compressdev" 00:03:38.365 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:38.365 Message: lib/distributor: Defining dependency "distributor" 00:03:38.365 Message: lib/dmadev: Defining dependency "dmadev" 00:03:38.365 Message: lib/efd: Defining dependency "efd" 00:03:38.365 Message: lib/eventdev: Defining dependency "eventdev" 00:03:38.365 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:38.365 Message: lib/gpudev: Defining dependency "gpudev" 00:03:38.365 Message: lib/gro: Defining dependency "gro" 00:03:38.365 Message: lib/gso: Defining dependency "gso" 00:03:38.365 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:38.365 Message: lib/jobstats: Defining dependency "jobstats" 00:03:38.365 Message: lib/latencystats: Defining dependency "latencystats" 00:03:38.365 Message: lib/lpm: Defining dependency "lpm" 00:03:38.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512IFMA__" : 1 00:03:38.365 Message: 
lib/member: Defining dependency "member" 00:03:38.365 Message: lib/pcapng: Defining dependency "pcapng" 00:03:38.365 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:38.365 Message: lib/power: Defining dependency "power" 00:03:38.365 Message: lib/rawdev: Defining dependency "rawdev" 00:03:38.365 Message: lib/regexdev: Defining dependency "regexdev" 00:03:38.365 Message: lib/mldev: Defining dependency "mldev" 00:03:38.365 Message: lib/rib: Defining dependency "rib" 00:03:38.365 Message: lib/reorder: Defining dependency "reorder" 00:03:38.365 Message: lib/sched: Defining dependency "sched" 00:03:38.365 Message: lib/security: Defining dependency "security" 00:03:38.365 Message: lib/stack: Defining dependency "stack" 00:03:38.365 Has header "linux/userfaultfd.h" : YES 00:03:38.365 Has header "linux/vduse.h" : YES 00:03:38.365 Message: lib/vhost: Defining dependency "vhost" 00:03:38.365 Message: lib/ipsec: Defining dependency "ipsec" 00:03:38.365 Message: lib/pdcp: Defining dependency "pdcp" 00:03:38.365 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:38.365 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:38.365 Message: lib/fib: Defining dependency "fib" 00:03:38.365 Message: lib/port: Defining dependency "port" 00:03:38.365 Message: lib/pdump: Defining dependency "pdump" 00:03:38.365 Message: lib/table: Defining dependency "table" 00:03:38.365 Message: lib/pipeline: Defining dependency "pipeline" 00:03:38.365 Message: lib/graph: Defining dependency "graph" 00:03:38.365 Message: lib/node: Defining dependency "node" 00:03:38.365 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:38.365 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:38.365 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:38.365 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:38.937 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:38.937 Compiler for C supports arguments -Wno-unused-value: YES 00:03:38.937 Compiler for C supports arguments -Wno-format: YES 00:03:38.937 Compiler for C supports arguments -Wno-format-security: YES 00:03:38.937 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:38.937 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:38.937 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:38.937 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:38.937 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:38.937 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:38.937 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:38.937 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:38.937 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:38.937 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:38.937 Has header "sys/epoll.h" : YES 00:03:38.937 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:38.937 Configuring doxy-api-html.conf using configuration 00:03:38.937 Configuring doxy-api-man.conf using configuration 00:03:38.937 Program mandb found: YES (/usr/bin/mandb) 00:03:38.937 Program sphinx-build found: NO 00:03:38.937 Configuring rte_build_config.h using configuration 00:03:38.937 Message: 00:03:38.937 ================= 00:03:38.937 Applications Enabled 00:03:38.937 ================= 00:03:38.937 00:03:38.937 apps: 00:03:38.937 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:03:38.937 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:38.937 test-pmd, test-regex, test-sad, test-security-perf, 00:03:38.938 00:03:38.938 Message: 00:03:38.938 ================= 00:03:38.938 Libraries Enabled 00:03:38.938 ================= 00:03:38.938 00:03:38.938 libs: 00:03:38.938 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:38.938 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:38.938 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:38.938 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:38.938 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:38.938 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:38.938 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:38.938 00:03:38.938 00:03:38.938 Message: 00:03:38.938 =============== 00:03:38.938 Drivers Enabled 00:03:38.938 =============== 00:03:38.938 00:03:38.938 common: 00:03:38.938 00:03:38.938 bus: 00:03:38.938 pci, vdev, 00:03:38.938 mempool: 00:03:38.938 ring, 00:03:38.938 dma: 00:03:38.938 00:03:38.938 net: 00:03:38.938 i40e, 00:03:38.938 raw: 00:03:38.938 00:03:38.938 crypto: 00:03:38.938 00:03:38.938 compress: 00:03:38.938 00:03:38.938 regex: 00:03:38.938 00:03:38.938 ml: 00:03:38.938 00:03:38.938 vdpa: 00:03:38.938 00:03:38.938 event: 00:03:38.938 00:03:38.938 baseband: 00:03:38.938 00:03:38.938 gpu: 00:03:38.938 00:03:38.938 00:03:38.938 Message: 00:03:38.938 ================= 00:03:38.938 Content Skipped 00:03:38.938 ================= 00:03:38.938 00:03:38.938 apps: 00:03:38.938 00:03:38.938 libs: 00:03:38.938 00:03:38.938 drivers: 00:03:38.938 common/cpt: not in enabled drivers build config 00:03:38.938 common/dpaax: not in enabled drivers build config 00:03:38.938 common/iavf: not in enabled drivers build config 00:03:38.938 common/idpf: not in enabled drivers build config 00:03:38.938 common/mvep: not in enabled drivers build config 00:03:38.938 common/octeontx: not in enabled drivers build config 00:03:38.938 bus/auxiliary: not in enabled drivers build config 00:03:38.938 bus/cdx: not in enabled drivers build config 00:03:38.938 bus/dpaa: not in enabled drivers build config 00:03:38.938 bus/fslmc: not in enabled drivers build config 00:03:38.938 bus/ifpga: not in enabled drivers build config 00:03:38.938 bus/platform: not in enabled drivers build config 00:03:38.938 bus/vmbus: not in enabled drivers build config 00:03:38.938 common/cnxk: not in enabled drivers build config 00:03:38.938 common/mlx5: not in enabled drivers build config 00:03:38.938 common/nfp: not in enabled drivers build config 00:03:38.938 common/qat: not in enabled drivers build config 00:03:38.938 common/sfc_efx: not in enabled drivers build config 00:03:38.938 mempool/bucket: not in enabled drivers build config 00:03:38.938 mempool/cnxk: not in enabled drivers build config 00:03:38.938 mempool/dpaa: not in enabled drivers build config 00:03:38.938 mempool/dpaa2: not in enabled drivers build config 00:03:38.938 mempool/octeontx: not in enabled drivers build config 00:03:38.938 mempool/stack: not in enabled drivers build config 00:03:38.938 dma/cnxk: not in enabled drivers build config 00:03:38.938 dma/dpaa: not in enabled drivers build config 00:03:38.938 dma/dpaa2: not in enabled drivers build config 00:03:38.938 dma/hisilicon: not in enabled drivers build config 00:03:38.938 dma/idxd: not in enabled drivers build 
config 00:03:38.938 dma/ioat: not in enabled drivers build config 00:03:38.938 dma/skeleton: not in enabled drivers build config 00:03:38.938 net/af_packet: not in enabled drivers build config 00:03:38.938 net/af_xdp: not in enabled drivers build config 00:03:38.938 net/ark: not in enabled drivers build config 00:03:38.938 net/atlantic: not in enabled drivers build config 00:03:38.938 net/avp: not in enabled drivers build config 00:03:38.938 net/axgbe: not in enabled drivers build config 00:03:38.938 net/bnx2x: not in enabled drivers build config 00:03:38.938 net/bnxt: not in enabled drivers build config 00:03:38.938 net/bonding: not in enabled drivers build config 00:03:38.938 net/cnxk: not in enabled drivers build config 00:03:38.938 net/cpfl: not in enabled drivers build config 00:03:38.938 net/cxgbe: not in enabled drivers build config 00:03:38.938 net/dpaa: not in enabled drivers build config 00:03:38.938 net/dpaa2: not in enabled drivers build config 00:03:38.938 net/e1000: not in enabled drivers build config 00:03:38.938 net/ena: not in enabled drivers build config 00:03:38.938 net/enetc: not in enabled drivers build config 00:03:38.938 net/enetfec: not in enabled drivers build config 00:03:38.938 net/enic: not in enabled drivers build config 00:03:38.938 net/failsafe: not in enabled drivers build config 00:03:38.938 net/fm10k: not in enabled drivers build config 00:03:38.938 net/gve: not in enabled drivers build config 00:03:38.938 net/hinic: not in enabled drivers build config 00:03:38.938 net/hns3: not in enabled drivers build config 00:03:38.938 net/iavf: not in enabled drivers build config 00:03:38.938 net/ice: not in enabled drivers build config 00:03:38.938 net/idpf: not in enabled drivers build config 00:03:38.938 net/igc: not in enabled drivers build config 00:03:38.938 net/ionic: not in enabled drivers build config 00:03:38.938 net/ipn3ke: not in enabled drivers build config 00:03:38.938 net/ixgbe: not in enabled drivers build config 00:03:38.938 net/mana: not in enabled drivers build config 00:03:38.938 net/memif: not in enabled drivers build config 00:03:38.938 net/mlx4: not in enabled drivers build config 00:03:38.938 net/mlx5: not in enabled drivers build config 00:03:38.938 net/mvneta: not in enabled drivers build config 00:03:38.938 net/mvpp2: not in enabled drivers build config 00:03:38.938 net/netvsc: not in enabled drivers build config 00:03:38.938 net/nfb: not in enabled drivers build config 00:03:38.938 net/nfp: not in enabled drivers build config 00:03:38.938 net/ngbe: not in enabled drivers build config 00:03:38.938 net/null: not in enabled drivers build config 00:03:38.938 net/octeontx: not in enabled drivers build config 00:03:38.938 net/octeon_ep: not in enabled drivers build config 00:03:38.938 net/pcap: not in enabled drivers build config 00:03:38.938 net/pfe: not in enabled drivers build config 00:03:38.938 net/qede: not in enabled drivers build config 00:03:38.938 net/ring: not in enabled drivers build config 00:03:38.938 net/sfc: not in enabled drivers build config 00:03:38.938 net/softnic: not in enabled drivers build config 00:03:38.938 net/tap: not in enabled drivers build config 00:03:38.938 net/thunderx: not in enabled drivers build config 00:03:38.938 net/txgbe: not in enabled drivers build config 00:03:38.938 net/vdev_netvsc: not in enabled drivers build config 00:03:38.938 net/vhost: not in enabled drivers build config 00:03:38.938 net/virtio: not in enabled drivers build config 00:03:38.938 net/vmxnet3: not in enabled drivers build config 
00:03:38.938 raw/cnxk_bphy: not in enabled drivers build config 00:03:38.938 raw/cnxk_gpio: not in enabled drivers build config 00:03:38.938 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:38.938 raw/ifpga: not in enabled drivers build config 00:03:38.938 raw/ntb: not in enabled drivers build config 00:03:38.938 raw/skeleton: not in enabled drivers build config 00:03:38.938 crypto/armv8: not in enabled drivers build config 00:03:38.938 crypto/bcmfs: not in enabled drivers build config 00:03:38.938 crypto/caam_jr: not in enabled drivers build config 00:03:38.938 crypto/ccp: not in enabled drivers build config 00:03:38.938 crypto/cnxk: not in enabled drivers build config 00:03:38.938 crypto/dpaa_sec: not in enabled drivers build config 00:03:38.938 crypto/dpaa2_sec: not in enabled drivers build config 00:03:38.938 crypto/ipsec_mb: not in enabled drivers build config 00:03:38.938 crypto/mlx5: not in enabled drivers build config 00:03:38.938 crypto/mvsam: not in enabled drivers build config 00:03:38.938 crypto/nitrox: not in enabled drivers build config 00:03:38.938 crypto/null: not in enabled drivers build config 00:03:38.938 crypto/octeontx: not in enabled drivers build config 00:03:38.938 crypto/openssl: not in enabled drivers build config 00:03:38.938 crypto/scheduler: not in enabled drivers build config 00:03:38.938 crypto/uadk: not in enabled drivers build config 00:03:38.938 crypto/virtio: not in enabled drivers build config 00:03:38.938 compress/isal: not in enabled drivers build config 00:03:38.938 compress/mlx5: not in enabled drivers build config 00:03:38.938 compress/octeontx: not in enabled drivers build config 00:03:38.938 compress/zlib: not in enabled drivers build config 00:03:38.938 regex/mlx5: not in enabled drivers build config 00:03:38.939 regex/cn9k: not in enabled drivers build config 00:03:38.939 ml/cnxk: not in enabled drivers build config 00:03:38.939 vdpa/ifc: not in enabled drivers build config 00:03:38.939 vdpa/mlx5: not in enabled drivers build config 00:03:38.939 vdpa/nfp: not in enabled drivers build config 00:03:38.939 vdpa/sfc: not in enabled drivers build config 00:03:38.939 event/cnxk: not in enabled drivers build config 00:03:38.939 event/dlb2: not in enabled drivers build config 00:03:38.939 event/dpaa: not in enabled drivers build config 00:03:38.939 event/dpaa2: not in enabled drivers build config 00:03:38.939 event/dsw: not in enabled drivers build config 00:03:38.939 event/opdl: not in enabled drivers build config 00:03:38.939 event/skeleton: not in enabled drivers build config 00:03:38.939 event/sw: not in enabled drivers build config 00:03:38.939 event/octeontx: not in enabled drivers build config 00:03:38.939 baseband/acc: not in enabled drivers build config 00:03:38.939 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:38.939 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:38.939 baseband/la12xx: not in enabled drivers build config 00:03:38.939 baseband/null: not in enabled drivers build config 00:03:38.939 baseband/turbo_sw: not in enabled drivers build config 00:03:38.939 gpu/cuda: not in enabled drivers build config 00:03:38.939 00:03:38.939 00:03:38.939 Build targets in project: 215 00:03:38.939 00:03:38.939 DPDK 23.11.0 00:03:38.939 00:03:38.939 User defined options 00:03:38.939 libdir : lib 00:03:38.939 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:38.939 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:38.939 c_link_args : 00:03:38.939 enable_docs : false 00:03:38.939 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:38.939 enable_kmods : false 00:03:38.939 machine : native 00:03:38.939 tests : false 00:03:38.939 00:03:38.939 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:38.939 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:39.201 04:49:55 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:39.201 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:39.201 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:39.201 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:39.201 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:39.201 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:39.201 [5/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:39.201 [6/705] Linking static target lib/librte_kvargs.a 00:03:39.463 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:39.463 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:39.463 [9/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:39.463 [10/705] Linking static target lib/librte_log.a 00:03:39.463 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:39.463 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.463 [13/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:39.463 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:39.723 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:39.723 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:39.723 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:39.723 [18/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.723 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:39.984 [20/705] Linking target lib/librte_log.so.24.0 00:03:39.984 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:39.984 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:39.984 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:39.984 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:39.984 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:40.243 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:40.243 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:40.243 [28/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:40.243 [29/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:40.243 [30/705] Linking static target lib/librte_telemetry.a 00:03:40.243 [31/705] Linking target lib/librte_kvargs.so.24.0 00:03:40.243 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:40.243 [33/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:40.243 [34/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:40.243 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:40.243 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:40.243 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:40.501 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:40.501 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:40.501 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:40.501 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:40.501 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.501 [43/705] Linking target lib/librte_telemetry.so.24.0 00:03:40.501 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:40.759 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:40.759 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:40.759 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:40.759 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:40.759 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:40.759 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:40.759 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:40.759 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:41.017 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:41.017 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:41.017 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:41.017 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:41.017 [57/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:41.017 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:41.017 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:41.017 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:41.017 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:41.017 [62/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:41.017 [63/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:41.017 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:41.275 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:41.275 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:41.275 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:41.275 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:41.533 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:41.533 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:41.533 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:41.533 [72/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
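The "WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]`" message above means the configure step invoked meson without the explicit `setup` subcommand, a form meson has deprecated. A minimal sketch of the equivalent explicit invocation, reconstructed from the "User defined options" summary above (the build directory name comes from the ninja banner; the exact flag spellings are assumptions, not taken from this log):

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false -Denable_kmods=false \
        -Dtests=false -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j10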
00:03:41.533 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:41.534 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:41.534 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:41.534 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:41.534 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:41.534 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:41.792 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:41.792 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:41.792 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:41.792 [82/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:41.792 [83/705] Linking static target lib/librte_ring.a 00:03:41.792 [84/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:41.792 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:41.792 [86/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:41.792 [87/705] Linking static target lib/librte_eal.a 00:03:42.050 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:42.050 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:42.050 [90/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.050 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:42.050 [92/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:42.309 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:42.309 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:42.309 [95/705] Linking static target lib/librte_mempool.a 00:03:42.309 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:42.309 [97/705] Linking static target lib/librte_rcu.a 00:03:42.309 [98/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:42.309 [99/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:42.309 [100/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:42.309 [101/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:42.309 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:42.567 [103/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.567 [104/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:42.567 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:42.567 [106/705] Linking static target lib/librte_meter.a 00:03:42.567 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:42.567 [108/705] Linking static target lib/librte_net.a 00:03:42.567 [109/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:42.825 [110/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:42.826 [111/705] Linking static target lib/librte_mbuf.a 00:03:42.826 [112/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.826 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:42.826 [114/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.826 
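Steps such as "[87/705] Linking static target lib/librte_eal.a" above archive the per-file objects (eal_common_*, eal_unix_*, eal_linux_*, eal_x86_*) into the static EAL library. A quick sanity check on the resulting archive, sketched under the assumption that library outputs land in lib/ beneath the build-tmp directory named in the ninja banner:

    nm --defined-only build-tmp/lib/librte_eal.a | grep ' rte_eal_init'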
[115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:42.826 [116/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.826 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:43.083 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.083 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:43.341 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:43.341 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:43.341 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:43.341 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:43.341 [124/705] Linking static target lib/librte_pci.a 00:03:43.599 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:43.599 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:43.599 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:43.599 [128/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:43.599 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:43.599 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:43.599 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.599 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:43.599 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:43.599 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:43.599 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:43.858 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:43.858 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:43.858 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:43.858 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:43.858 [140/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:43.858 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:43.858 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:43.858 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:43.858 [144/705] Linking static target lib/librte_cmdline.a 00:03:44.116 [145/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:44.116 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:44.116 [147/705] Linking static target lib/librte_metrics.a 00:03:44.116 [148/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:44.116 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:44.375 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.375 [151/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:44.375 [152/705] Linking static target lib/librte_timer.a 00:03:44.375 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:44.633 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:44.633 [155/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.633 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:44.892 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:44.892 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:44.892 [159/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:45.150 [160/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:45.150 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:45.150 [162/705] Linking static target lib/librte_bitratestats.a 00:03:45.150 [163/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:45.150 [164/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:45.408 [165/705] Linking static target lib/librte_bbdev.a 00:03:45.408 [166/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.408 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:45.666 [168/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:45.666 [169/705] Linking static target lib/librte_hash.a 00:03:45.666 [170/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:45.666 [171/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:45.666 [172/705] Linking static target lib/acl/libavx2_tmp.a 00:03:45.666 [173/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.666 [174/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:45.924 [175/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:45.924 [176/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:45.924 [177/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.924 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:46.183 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:46.183 [180/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.183 [181/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:46.183 [182/705] Linking static target lib/librte_cfgfile.a 00:03:46.183 [183/705] Linking target lib/librte_eal.so.24.0 00:03:46.183 [184/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:46.183 [185/705] Linking target lib/librte_ring.so.24.0 00:03:46.183 [186/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:46.183 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:46.183 [188/705] Linking target lib/librte_meter.so.24.0 00:03:46.183 [189/705] Linking target lib/librte_pci.so.24.0 00:03:46.442 [190/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.442 [191/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:46.442 [192/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:46.442 [193/705] Linking target lib/librte_timer.so.24.0 00:03:46.442 [194/705] Linking target lib/librte_rcu.so.24.0 00:03:46.442 [195/705] Linking target lib/librte_mempool.so.24.0 00:03:46.442 [196/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:46.442 [197/705] Generating symbol file 
lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:46.442 [198/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:46.442 [199/705] Linking target lib/librte_cfgfile.so.24.0 00:03:46.442 [200/705] Linking static target lib/librte_ethdev.a 00:03:46.442 [201/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:46.442 [202/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:46.442 [203/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:46.442 [204/705] Linking static target lib/librte_compressdev.a 00:03:46.442 [205/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:46.442 [206/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:46.442 [207/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:46.442 [208/705] Linking static target lib/librte_acl.a 00:03:46.442 [209/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:46.442 [210/705] Linking static target lib/librte_bpf.a 00:03:46.442 [211/705] Linking target lib/librte_mbuf.so.24.0 00:03:46.701 [212/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:46.701 [213/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:46.701 [214/705] Linking target lib/librte_net.so.24.0 00:03:46.701 [215/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.701 [216/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.701 [217/705] Linking target lib/librte_bbdev.so.24.0 00:03:46.701 [218/705] Linking target lib/librte_acl.so.24.0 00:03:46.701 [219/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.701 [220/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:46.701 [221/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:46.701 [222/705] Linking target lib/librte_compressdev.so.24.0 00:03:46.701 [223/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:46.701 [224/705] Linking target lib/librte_cmdline.so.24.0 00:03:46.701 [225/705] Linking target lib/librte_hash.so.24.0 00:03:46.959 [226/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:46.959 [227/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:46.959 [228/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:46.959 [229/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:46.959 [230/705] Linking static target lib/librte_distributor.a 00:03:46.960 [231/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:47.218 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.218 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:47.218 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:47.218 [235/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:47.477 [236/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:47.477 [237/705] Linking static target lib/librte_dmadev.a 00:03:47.477 [238/705] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:47.477 [239/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:47.736 [240/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.736 [241/705] Linking target lib/librte_dmadev.so.24.0 00:03:47.736 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:47.736 [243/705] Linking static target lib/librte_efd.a 00:03:47.736 [244/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:47.736 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:47.995 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.995 [247/705] Linking target lib/librte_efd.so.24.0 00:03:47.995 [248/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:47.995 [249/705] Linking static target lib/librte_dispatcher.a 00:03:48.254 [250/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:48.254 [251/705] Linking static target lib/librte_cryptodev.a 00:03:48.254 [252/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:48.254 [253/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:48.254 [254/705] Linking static target lib/librte_gpudev.a 00:03:48.254 [255/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:48.254 [256/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.254 [257/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:48.512 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:48.771 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:48.771 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:48.771 [261/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:48.771 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:48.771 [263/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.771 [264/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:48.771 [265/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:48.772 [266/705] Linking static target lib/librte_gro.a 00:03:48.772 [267/705] Linking target lib/librte_gpudev.so.24.0 00:03:49.030 [268/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:49.030 [269/705] Linking static target lib/librte_eventdev.a 00:03:49.030 [270/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:49.030 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:49.030 [272/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.030 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:49.030 [274/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.030 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:49.289 [276/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:49.289 [277/705] Linking target lib/librte_cryptodev.so.24.0 00:03:49.289 [278/705] Linking static target lib/librte_gso.a 00:03:49.289 [279/705] Generating symbol file 
lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:49.289 [280/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:49.289 [281/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.289 [282/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:49.289 [283/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:49.548 [284/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:49.548 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:49.548 [286/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:49.548 [287/705] Linking static target lib/librte_jobstats.a 00:03:49.548 [288/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:49.548 [289/705] Linking static target lib/librte_ip_frag.a 00:03:49.806 [290/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:49.806 [291/705] Linking static target lib/librte_latencystats.a 00:03:49.806 [292/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:49.806 [293/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.806 [294/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.806 [295/705] Linking target lib/librte_jobstats.so.24.0 00:03:49.806 [296/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.806 [297/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:49.806 [298/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:50.065 [299/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.065 [300/705] Linking target lib/librte_ethdev.so.24.0 00:03:50.065 [301/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:50.065 [302/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:50.065 [303/705] Linking target lib/librte_metrics.so.24.0 00:03:50.065 [304/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:50.325 [305/705] Linking target lib/librte_bpf.so.24.0 00:03:50.325 [306/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:50.325 [307/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:50.325 [308/705] Linking target lib/librte_gro.so.24.0 00:03:50.325 [309/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:50.325 [310/705] Linking target lib/librte_gso.so.24.0 00:03:50.325 [311/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:50.325 [312/705] Linking target lib/librte_bitratestats.so.24.0 00:03:50.325 [313/705] Linking target lib/librte_ip_frag.so.24.0 00:03:50.325 [314/705] Linking static target lib/librte_lpm.a 00:03:50.325 [315/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:50.325 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:50.325 [317/705] Linking target lib/librte_latencystats.so.24.0 00:03:50.325 [318/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:50.325 [319/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 
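The objects compiled in this stretch (power_acpi_cpufreq, power_amd_pstate_cpufreq, power_kvm_vm, and the other librte_power backends) appear to correspond to the power/* entries requested via enable_drivers in the configuration summary. Once built, the usual way to consume such a DPDK installation is through pkg-config; a sketch, assuming the prefix and "libdir : lib" shown under "User defined options" (the pkgconfig subdirectory is inferred, not taken from this log):

    ninja -C build-tmp install
    PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
        pkg-config --cflags --libs libdpdk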
00:03:50.325 [320/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:50.325 [321/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:50.325 [322/705] Linking static target lib/librte_pcapng.a 00:03:50.583 [323/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.583 [324/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.583 [325/705] Linking target lib/librte_eventdev.so.24.0 00:03:50.583 [326/705] Linking target lib/librte_lpm.so.24.0 00:03:50.583 [327/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.583 [328/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:50.583 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:50.583 [330/705] Linking target lib/librte_pcapng.so.24.0 00:03:50.583 [331/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:50.583 [332/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:50.583 [333/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:50.583 [334/705] Linking target lib/librte_dispatcher.so.24.0 00:03:50.583 [335/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:50.856 [336/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:50.856 [337/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:50.856 [338/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:50.856 [339/705] Linking static target lib/librte_power.a 00:03:50.856 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:50.856 [341/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:51.114 [342/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:51.114 [343/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:51.114 [344/705] Linking static target lib/librte_regexdev.a 00:03:51.114 [345/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:51.114 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:51.114 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:51.114 [348/705] Linking static target lib/librte_rawdev.a 00:03:51.114 [349/705] Linking static target lib/librte_mldev.a 00:03:51.114 [350/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:51.114 [351/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:51.114 [352/705] Linking static target lib/librte_member.a 00:03:51.373 [353/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.373 [354/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:51.373 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:51.373 [356/705] Linking target lib/librte_power.so.24.0 00:03:51.373 [357/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:51.373 [358/705] Linking static target lib/librte_reorder.a 00:03:51.373 [359/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.373 [360/705] Linking target lib/librte_rawdev.so.24.0 00:03:51.373 [361/705] Generating lib/member.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:51.373 [362/705] Linking target lib/librte_member.so.24.0 00:03:51.631 [363/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:51.631 [364/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.631 [365/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:51.631 [366/705] Linking static target lib/librte_rib.a 00:03:51.631 [367/705] Linking target lib/librte_regexdev.so.24.0 00:03:51.631 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:51.631 [369/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.631 [370/705] Linking target lib/librte_reorder.so.24.0 00:03:51.631 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:51.631 [372/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:51.631 [373/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:51.631 [374/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:51.631 [375/705] Linking static target lib/librte_security.a 00:03:51.890 [376/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:51.890 [377/705] Linking static target lib/librte_stack.a 00:03:51.890 [378/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.890 [379/705] Linking target lib/librte_rib.so.24.0 00:03:51.890 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.890 [381/705] Linking target lib/librte_stack.so.24.0 00:03:51.890 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:51.890 [383/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.890 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:51.890 [385/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:51.890 [386/705] Linking target lib/librte_mldev.so.24.0 00:03:52.148 [387/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.148 [388/705] Linking target lib/librte_security.so.24.0 00:03:52.148 [389/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:52.148 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:52.148 [391/705] Linking static target lib/librte_sched.a 00:03:52.148 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:52.407 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:52.407 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:52.407 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.407 [396/705] Linking target lib/librte_sched.so.24.0 00:03:52.407 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:52.684 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:52.684 [399/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:52.684 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:52.684 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:52.947 [402/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:52.947 [403/705] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:52.947 [404/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:52.947 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:52.947 [406/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:53.206 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:53.206 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:53.206 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:53.206 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:53.206 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:53.465 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:53.465 [413/705] Linking static target lib/librte_ipsec.a 00:03:53.465 [414/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:53.465 [415/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:53.724 [416/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.724 [417/705] Linking target lib/librte_ipsec.so.24.0 00:03:53.724 [418/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:53.724 [419/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:53.724 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:53.724 [421/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:53.724 [422/705] Linking static target lib/librte_fib.a 00:03:53.982 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:53.982 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:53.982 [425/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.982 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:53.982 [427/705] Linking target lib/librte_fib.so.24.0 00:03:53.982 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:53.982 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:54.241 [430/705] Linking static target lib/librte_pdcp.a 00:03:54.241 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.241 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:54.500 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:54.500 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:54.500 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:54.500 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:54.500 [437/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:54.759 [438/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:54.759 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:54.759 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:55.018 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:55.018 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:55.018 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:55.018 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:55.018 [445/705] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:55.018 [446/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:55.018 [447/705] Linking static target lib/librte_pdump.a 00:03:55.018 [448/705] Linking static target lib/librte_port.a 00:03:55.018 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:55.018 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:55.277 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:55.277 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.277 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:55.537 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.537 [455/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:55.537 [456/705] Linking target lib/librte_port.so.24.0 00:03:55.537 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:55.537 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:55.537 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:55.537 [460/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:55.537 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:55.537 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:55.795 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:55.795 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:55.795 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:55.795 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:55.795 [467/705] Linking static target lib/librte_table.a 00:03:56.053 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:56.053 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:56.312 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.312 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:56.312 [472/705] Linking target lib/librte_table.so.24.0 00:03:56.312 [473/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:56.312 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:56.312 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:56.570 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:56.570 [477/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:56.570 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:56.570 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:56.570 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:56.828 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:57.087 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:57.087 [483/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:57.087 [484/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:57.087 [485/705] Linking static target lib/librte_graph.a 00:03:57.087 [486/705] Compiling C object 
lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:57.087 [487/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:57.346 [488/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:57.346 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:57.346 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.346 [491/705] Linking target lib/librte_graph.so.24.0 00:03:57.604 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:57.604 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:57.604 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:57.604 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:57.604 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:57.862 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:57.862 [498/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:57.862 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:57.862 [500/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:57.862 [501/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:57.862 [502/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:58.121 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:58.121 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:58.121 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:58.121 [506/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:58.121 [507/705] Linking static target lib/librte_node.a 00:03:58.121 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:58.121 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:58.121 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:58.379 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.379 [512/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:58.379 [513/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:58.379 [514/705] Linking target lib/librte_node.so.24.0 00:03:58.379 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:58.379 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:58.379 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:58.379 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:58.380 [519/705] Linking static target drivers/librte_bus_pci.a 00:03:58.638 [520/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:58.638 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:58.638 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:58.638 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:58.638 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:58.638 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:58.638 [526/705] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:58.638 [527/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.638 [528/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:58.897 [529/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:58.897 [530/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.897 [531/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:58.897 [532/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:58.897 [533/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:58.897 [534/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:58.897 [535/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:59.156 [536/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:59.156 [537/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:59.156 [538/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:59.156 [539/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.156 [540/705] Linking static target drivers/librte_mempool_ring.a 00:03:59.156 [541/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:59.156 [542/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:59.415 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:59.674 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:59.674 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:59.933 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:04:00.191 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:04:00.191 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:04:00.191 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:04:00.191 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:04:00.449 [551/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:04:00.449 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:04:00.450 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:04:00.450 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:04:00.450 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:04:00.708 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:04:00.708 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:04:00.708 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:04:00.978 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:04:00.978 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:04:01.351 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:04:01.351 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:04:01.351 [563/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:04:01.351 [564/705] Compiling 
C object app/dpdk-graph.p/graph_graph.c.o 00:04:01.351 [565/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:04:01.610 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:04:01.610 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:04:01.610 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:04:01.610 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:04:01.610 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:04:01.610 [571/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:04:01.869 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:04:01.869 [573/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:04:01.869 [574/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:04:01.869 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:04:01.869 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:04:02.128 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:04:02.128 [578/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:04:02.386 [579/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:04:02.386 [580/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:04:02.387 [581/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:04:02.644 [582/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:04:02.644 [583/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:04:02.903 [584/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:04:02.903 [585/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:04:02.903 [586/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:04:02.903 [587/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:04:02.903 [588/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:04:02.903 [589/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:04:02.903 [590/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:04:02.903 [591/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:03.161 [592/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:04:03.161 [593/705] Linking static target drivers/librte_net_i40e.a 00:04:03.161 [594/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:03.161 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:04:03.161 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:04:03.419 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:04:03.419 [598/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.419 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:04:03.419 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:04:03.419 [601/705] 
Linking target drivers/librte_net_i40e.so.24.0 00:04:03.419 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:04:03.677 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:04:03.677 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:04:03.677 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:04:03.677 [606/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:04:03.937 [607/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:04:03.937 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:04:03.937 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:04:03.937 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:04:03.937 [611/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:04:04.195 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:04:04.195 [613/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:04:04.195 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:04:04.763 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:04:04.763 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:04:04.763 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:04:04.763 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:04:05.021 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:04:05.021 [620/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:04:05.021 [621/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:04:05.021 [622/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:04:05.280 [623/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:04:05.280 [624/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:04:05.280 [625/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:04:05.280 [626/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:04:05.280 [627/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:04:05.280 [628/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:05.280 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:04:05.539 [630/705] Linking static target lib/librte_vhost.a 00:04:05.539 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:04:05.539 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:04:05.539 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:04:05.539 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:04:05.539 [635/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:04:05.797 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:04:05.797 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:04:05.797 [638/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:04:06.055 [639/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:04:06.055 [640/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:04:06.055 [641/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:04:06.055 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:04:06.055 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:04:06.313 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:06.313 [645/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.313 [646/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:06.313 [647/705] Linking target lib/librte_vhost.so.24.0 00:04:06.313 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:04:06.313 [649/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:04:06.313 [650/705] Linking static target lib/librte_pipeline.a 00:04:06.313 [651/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:06.571 [652/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:06.571 [653/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:04:06.571 [654/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:06.829 [655/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:04:06.829 [656/705] Linking target app/dpdk-dumpcap 00:04:06.829 [657/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:04:06.829 [658/705] Linking target app/dpdk-graph 00:04:06.829 [659/705] Linking target app/dpdk-pdump 00:04:07.087 [660/705] Linking target app/dpdk-proc-info 00:04:07.087 [661/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:07.087 [662/705] Linking target app/dpdk-test-cmdline 00:04:07.087 [663/705] Linking target app/dpdk-test-acl 00:04:07.087 [664/705] Linking target app/dpdk-test-bbdev 00:04:07.087 [665/705] Linking target app/dpdk-test-compress-perf 00:04:07.345 [666/705] Linking target app/dpdk-test-crypto-perf 00:04:07.345 [667/705] Linking target app/dpdk-test-dma-perf 00:04:07.345 [668/705] Linking target app/dpdk-test-fib 00:04:07.345 [669/705] Linking target app/dpdk-test-flow-perf 00:04:07.345 [670/705] Linking target app/dpdk-test-eventdev 00:04:07.345 [671/705] Linking target app/dpdk-test-gpudev 00:04:07.604 [672/705] Linking target app/dpdk-test-mldev 00:04:07.604 [673/705] Linking target app/dpdk-test-pipeline 00:04:07.604 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:07.862 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:04:07.862 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:07.862 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:07.862 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:07.862 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:08.120 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:08.120 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:08.378 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:08.378 [683/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:04:08.378 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:08.378 [685/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.378 [686/705] Linking target lib/librte_pipeline.so.24.0 00:04:08.378 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:08.378 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:08.637 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:08.637 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:08.895 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:08.895 [692/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:08.895 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:09.153 [694/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:09.153 [695/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:09.153 [696/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:09.153 [697/705] Linking target app/dpdk-test-sad 00:04:09.153 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:09.411 [699/705] Linking target app/dpdk-test-regex 00:04:09.411 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:09.411 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:09.670 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:09.928 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:09.928 [704/705] Linking target app/dpdk-test-security-perf 00:04:10.186 [705/705] Linking target app/dpdk-testpmd 00:04:10.186 04:50:26 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:04:10.186 04:50:26 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:10.186 04:50:26 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:10.186 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:10.186 [0/1] Installing files. 
00:04:10.448 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.448 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.449 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:10.450 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.450 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.738 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.739 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.740 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:10.740 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:10.740 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.740 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.740 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.740 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.740 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.740 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.740 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:10.741 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:10.741 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.741 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:11.006 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:11.006 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:11.006 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.006 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:11.006 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.006 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.007 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.008 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:11.009 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:11.009 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:11.009 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:11.009 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:11.009 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:11.009 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:11.009 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:11.009 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:11.009 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:11.009 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:11.009 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:11.009 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:11.009 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:11.009 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:11.009 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:11.009 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:11.009 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:11.009 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:11.009 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:11.009 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:11.009 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:11.009 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:11.009 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:11.009 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:11.010 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:11.010 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:11.010 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:11.010 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:11.010 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:11.010 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:11.010 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:11.010 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:11.010 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:11.010 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:11.010 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:11.010 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:11.010 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:11.010 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:11.010 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:11.010 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:11.010 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:11.010 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:11.010 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:11.010 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:11.010 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:11.010 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:11.010 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:11.010 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:11.010 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:11.010 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:11.010 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:11.010 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:11.010 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:11.010 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:11.010 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:11.010 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:11.010 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:11.010 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:11.010 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:11.010 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:11.010 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:11.010 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:11.010 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:11.010 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:11.010 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:11.010 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:11.010 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:11.010 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:11.010 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:11.010 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:11.010 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:11.010 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:11.010 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:11.010 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:11.010 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:11.010 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:11.010 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:11.010 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:11.010 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:11.010 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:11.010 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:11.010 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:11.010 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:11.010 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:11.010 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:11.010 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:11.010 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:11.010 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:11.010 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:11.010 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:11.010 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:11.010 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:11.010 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:11.010 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:11.010 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:11.010 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:11.010 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:11.010 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:11.010 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:11.010 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:11.010 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:11.010 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:11.010 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:11.010 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:11.010 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:11.010 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:11.010 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:11.010 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:11.010 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:11.010 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:11.010 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:11.010 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:11.010 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:11.010 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:11.010 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:11.010 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:11.010 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:11.010 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:11.010 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:11.010 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:11.010 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:11.010 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:11.010 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:11.010 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:11.010 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:11.010 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:11.010 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:11.010 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:11.010 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:11.010 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:11.010 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:11.011 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:04:11.011 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:11.011 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:04:11.011 04:50:27 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:04:11.011 ************************************ 00:04:11.011 END TEST build_native_dpdk 00:04:11.011 ************************************ 00:04:11.011 04:50:27 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:11.011 00:04:11.011 real 0m38.044s 00:04:11.011 user 4m19.291s 00:04:11.011 sys 0m40.570s 00:04:11.011 04:50:27 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:11.011 04:50:27 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:11.011 04:50:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:11.011 04:50:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:11.011 04:50:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:11.011 04:50:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:11.011 04:50:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:11.011 04:50:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:11.011 04:50:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:11.011 04:50:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:04:11.269 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:11.269 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.269 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:11.269 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:11.835 Using 'verbs' RDMA provider 00:04:22.743 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:35.026 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:35.026 Creating mk/config.mk...done. 00:04:35.026 Creating mk/cc.flags.mk...done. 00:04:35.026 Type 'make' to build. 00:04:35.026 04:50:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:35.026 04:50:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:35.026 04:50:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:35.026 04:50:50 -- common/autotest_common.sh@10 -- $ set +x 00:04:35.026 ************************************ 00:04:35.026 START TEST make 00:04:35.026 ************************************ 00:04:35.026 04:50:50 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:35.026 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:35.026 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:35.026 meson setup builddir \ 00:04:35.026 -Dwith-libaio=enabled \ 00:04:35.026 -Dwith-liburing=enabled \ 00:04:35.026 -Dwith-libvfn=disabled \ 00:04:35.026 -Dwith-spdk=disabled \ 00:04:35.026 -Dexamples=false \ 00:04:35.026 -Dtests=false \ 00:04:35.026 -Dtools=false && \ 00:04:35.026 meson compile -C builddir && \ 00:04:35.026 cd -) 00:04:35.026 make[1]: Nothing to be done for 'all'. 
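(Note: the install phase above stages the complete DPDK artifact tree that SPDK consumes: headers in build/include, versioned .so files plus unversioned symlinks in build/lib, driver PMDs relocated into build/lib/dpdk/pmds-24.0, and pkg-config metadata in build/lib/pkgconfig. The configure invocation above locates all of it through libdpdk.pc, as the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" line confirms. A minimal sketch for inspecting the staged tree by hand, assuming the same paths as this run; the pkg-config flags are standard tool usage, not commands taken from this log:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk      # version of the DPDK build just installed
  pkg-config --cflags --libs libdpdk   # the include/link flags configure consumes
  readlink /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so   # -> librte_eal.so.24, per the symlink entries above
)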
00:04:35.960 The Meson build system 00:04:35.960 Version: 1.5.0 00:04:35.960 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:35.960 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:35.960 Build type: native build 00:04:35.960 Project name: xnvme 00:04:35.960 Project version: 0.7.5 00:04:35.960 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:35.960 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:35.960 Host machine cpu family: x86_64 00:04:35.960 Host machine cpu: x86_64 00:04:35.960 Message: host_machine.system: linux 00:04:35.960 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:35.960 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:35.960 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:35.960 Run-time dependency threads found: YES 00:04:35.960 Has header "setupapi.h" : NO 00:04:35.960 Has header "linux/blkzoned.h" : YES 00:04:35.960 Has header "linux/blkzoned.h" : YES (cached) 00:04:35.960 Has header "libaio.h" : YES 00:04:35.960 Library aio found: YES 00:04:35.960 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:35.960 Run-time dependency liburing found: YES 2.2 00:04:35.960 Dependency libvfn skipped: feature with-libvfn disabled 00:04:35.960 Found CMake: /usr/bin/cmake (3.27.7) 00:04:35.960 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:35.960 Subproject spdk : skipped: feature with-spdk disabled 00:04:35.960 Run-time dependency appleframeworks found: NO (tried framework) 00:04:35.960 Run-time dependency appleframeworks found: NO (tried framework) 00:04:35.960 Library rt found: YES 00:04:35.960 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:35.960 Configuring xnvme_config.h using configuration 00:04:35.960 Configuring xnvme.spec using configuration 00:04:35.960 Run-time dependency bash-completion found: YES 2.11 00:04:35.960 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:35.960 Program cp found: YES (/usr/bin/cp) 00:04:35.960 Build targets in project: 3 00:04:35.960 00:04:35.960 xnvme 0.7.5 00:04:35.960 00:04:35.960 Subprojects 00:04:35.960 spdk : NO Feature 'with-spdk' disabled 00:04:35.960 00:04:35.960 User defined options 00:04:35.960 examples : false 00:04:35.960 tests : false 00:04:35.960 tools : false 00:04:35.960 with-libaio : enabled 00:04:35.960 with-liburing: enabled 00:04:35.960 with-libvfn : disabled 00:04:35.960 with-spdk : disabled 00:04:35.960 00:04:35.960 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:35.960 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:35.960 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:36.217 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:36.218 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:36.218 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:36.218 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:36.218 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:36.218 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:36.218 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:36.218 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:36.218 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:36.218 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:36.218 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:36.218 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:36.218 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:36.218 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:36.218 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:36.218 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:36.218 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:36.218 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:36.218 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:36.218 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:36.218 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:36.218 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:36.476 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:36.476 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:36.476 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:36.476 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:36.476 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:36.476 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:36.476 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:36.476 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:36.476 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:36.476 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:36.476 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:36.476 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:36.476 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:36.476 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:36.476 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:36.476 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:36.476 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:36.476 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:36.476 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:36.476 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:36.476 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:36.476 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:36.476 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:36.476 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:36.476 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:36.476 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:36.476 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:36.476 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:36.476 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:36.477 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:36.477 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:36.477 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:36.477 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:36.477 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:36.477 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:36.735 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:36.735 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:36.735 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:36.735 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:36.735 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:36.735 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:36.735 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:36.735 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:36.735 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:36.735 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:36.735 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:36.735 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:36.735 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:36.735 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:36.735 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:37.299 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:37.299 [75/76] Linking target lib/libxnvme.so.0.7.5 00:04:37.299 [76/76] Linking static target lib/libxnvme.a 00:04:37.300 INFO: autodetecting backend as ninja 00:04:37.300 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:37.300 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:09.365 CC lib/log/log.o 00:05:09.365 CC lib/log/log_deprecated.o 00:05:09.365 CC lib/log/log_flags.o 00:05:09.365 CC lib/ut/ut.o 00:05:09.623 CC lib/ut_mock/mock.o 00:05:09.623 LIB libspdk_ut.a 00:05:09.623 LIB libspdk_log.a 00:05:09.623 SO libspdk_ut.so.2.0 00:05:09.623 SO libspdk_log.so.7.1 00:05:09.623 LIB libspdk_ut_mock.a 00:05:09.623 SO libspdk_ut_mock.so.6.0 00:05:09.623 SYMLINK libspdk_ut.so 00:05:09.623 SYMLINK libspdk_log.so 00:05:09.623 SYMLINK libspdk_ut_mock.so 00:05:09.882 CC lib/util/base64.o 00:05:09.882 CC lib/ioat/ioat.o 00:05:09.882 CC lib/util/cpuset.o 00:05:09.882 CC lib/util/bit_array.o 00:05:09.882 CC lib/util/crc16.o 00:05:09.882 CC lib/util/crc32.o 00:05:09.882 CC lib/dma/dma.o 00:05:09.882 CC lib/util/crc32c.o 00:05:09.882 CXX lib/trace_parser/trace.o 00:05:09.882 CC lib/util/crc32_ieee.o 00:05:09.882 CC lib/vfio_user/host/vfio_user_pci.o 00:05:09.882 CC lib/util/crc64.o 00:05:09.882 CC lib/util/dif.o 00:05:09.882 LIB libspdk_dma.a 00:05:09.882 CC lib/util/fd.o 00:05:09.882 SO libspdk_dma.so.5.0 00:05:10.141 CC lib/util/fd_group.o 00:05:10.141 CC lib/vfio_user/host/vfio_user.o 00:05:10.141 CC lib/util/file.o 00:05:10.141 CC lib/util/hexlify.o 00:05:10.141 SYMLINK libspdk_dma.so 00:05:10.141 CC lib/util/iov.o 00:05:10.141 CC lib/util/math.o 
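(Note: the single-word prefixes in the make output above and below map onto the usual build steps: CC compiles one source file to an object, LIB archives objects into a static libspdk_*.a, SO links the versioned shared object, and SYMLINK drops the unversioned alias next to it. For the libspdk_log cycle visible above, the steps correspond roughly to the following sketch; these are illustrative shell equivalents of what each tag denotes, not SPDK's actual Makefile recipes:

  cc -fPIC -c lib/log/log.c -o log.o                         # CC lib/log/log.o
  ar rcs libspdk_log.a log.o log_deprecated.o log_flags.o    # LIB libspdk_log.a
  cc -shared -o libspdk_log.so.7.1 log.o log_deprecated.o log_flags.o   # SO libspdk_log.so.7.1
  ln -sf libspdk_log.so.7.1 libspdk_log.so                   # SYMLINK libspdk_log.so
)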
00:05:10.141 LIB libspdk_ioat.a 00:05:10.141 SO libspdk_ioat.so.7.0 00:05:10.141 CC lib/util/net.o 00:05:10.141 CC lib/util/pipe.o 00:05:10.141 CC lib/util/strerror_tls.o 00:05:10.141 SYMLINK libspdk_ioat.so 00:05:10.141 CC lib/util/string.o 00:05:10.141 CC lib/util/uuid.o 00:05:10.141 LIB libspdk_vfio_user.a 00:05:10.141 CC lib/util/xor.o 00:05:10.141 SO libspdk_vfio_user.so.5.0 00:05:10.141 CC lib/util/zipf.o 00:05:10.399 CC lib/util/md5.o 00:05:10.399 SYMLINK libspdk_vfio_user.so 00:05:10.657 LIB libspdk_util.a 00:05:10.657 SO libspdk_util.so.10.1 00:05:10.657 LIB libspdk_trace_parser.a 00:05:10.657 SO libspdk_trace_parser.so.6.0 00:05:10.915 SYMLINK libspdk_util.so 00:05:10.915 SYMLINK libspdk_trace_parser.so 00:05:10.915 CC lib/idxd/idxd.o 00:05:10.915 CC lib/idxd/idxd_user.o 00:05:10.915 CC lib/idxd/idxd_kernel.o 00:05:10.915 CC lib/vmd/vmd.o 00:05:10.915 CC lib/vmd/led.o 00:05:10.915 CC lib/json/json_parse.o 00:05:10.915 CC lib/json/json_util.o 00:05:10.915 CC lib/env_dpdk/env.o 00:05:10.915 CC lib/conf/conf.o 00:05:10.915 CC lib/rdma_utils/rdma_utils.o 00:05:10.915 CC lib/env_dpdk/memory.o 00:05:11.173 CC lib/env_dpdk/pci.o 00:05:11.173 LIB libspdk_conf.a 00:05:11.173 CC lib/json/json_write.o 00:05:11.173 CC lib/env_dpdk/init.o 00:05:11.173 CC lib/env_dpdk/threads.o 00:05:11.173 SO libspdk_conf.so.6.0 00:05:11.173 LIB libspdk_rdma_utils.a 00:05:11.173 SYMLINK libspdk_conf.so 00:05:11.173 CC lib/env_dpdk/pci_ioat.o 00:05:11.173 SO libspdk_rdma_utils.so.1.0 00:05:11.173 SYMLINK libspdk_rdma_utils.so 00:05:11.173 CC lib/env_dpdk/pci_virtio.o 00:05:11.173 CC lib/env_dpdk/pci_vmd.o 00:05:11.430 CC lib/env_dpdk/pci_idxd.o 00:05:11.430 CC lib/env_dpdk/pci_event.o 00:05:11.430 CC lib/env_dpdk/sigbus_handler.o 00:05:11.430 LIB libspdk_json.a 00:05:11.430 CC lib/env_dpdk/pci_dpdk.o 00:05:11.430 SO libspdk_json.so.6.0 00:05:11.430 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:11.430 SYMLINK libspdk_json.so 00:05:11.430 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:11.687 CC lib/rdma_provider/common.o 00:05:11.687 LIB libspdk_idxd.a 00:05:11.687 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:11.687 SO libspdk_idxd.so.12.1 00:05:11.687 LIB libspdk_vmd.a 00:05:11.688 SO libspdk_vmd.so.6.0 00:05:11.688 SYMLINK libspdk_idxd.so 00:05:11.688 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:11.688 CC lib/jsonrpc/jsonrpc_server.o 00:05:11.688 CC lib/jsonrpc/jsonrpc_client.o 00:05:11.688 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:11.688 SYMLINK libspdk_vmd.so 00:05:11.688 LIB libspdk_rdma_provider.a 00:05:11.688 SO libspdk_rdma_provider.so.7.0 00:05:11.947 SYMLINK libspdk_rdma_provider.so 00:05:11.947 LIB libspdk_jsonrpc.a 00:05:11.947 SO libspdk_jsonrpc.so.6.0 00:05:11.947 SYMLINK libspdk_jsonrpc.so 00:05:12.206 CC lib/rpc/rpc.o 00:05:12.465 LIB libspdk_env_dpdk.a 00:05:12.465 SO libspdk_env_dpdk.so.15.1 00:05:12.465 LIB libspdk_rpc.a 00:05:12.465 SO libspdk_rpc.so.6.0 00:05:12.465 SYMLINK libspdk_rpc.so 00:05:12.465 SYMLINK libspdk_env_dpdk.so 00:05:12.723 CC lib/notify/notify_rpc.o 00:05:12.723 CC lib/notify/notify.o 00:05:12.723 CC lib/keyring/keyring_rpc.o 00:05:12.723 CC lib/trace/trace_rpc.o 00:05:12.723 CC lib/trace/trace.o 00:05:12.723 CC lib/trace/trace_flags.o 00:05:12.723 CC lib/keyring/keyring.o 00:05:12.982 LIB libspdk_notify.a 00:05:12.982 SO libspdk_notify.so.6.0 00:05:12.982 LIB libspdk_keyring.a 00:05:12.982 SO libspdk_keyring.so.2.0 00:05:12.982 SYMLINK libspdk_notify.so 00:05:12.982 LIB libspdk_trace.a 00:05:12.982 SYMLINK libspdk_keyring.so 00:05:12.982 SO libspdk_trace.so.11.0 00:05:12.982 SYMLINK 
libspdk_trace.so 00:05:13.243 CC lib/sock/sock_rpc.o 00:05:13.243 CC lib/sock/sock.o 00:05:13.243 CC lib/thread/thread.o 00:05:13.243 CC lib/thread/iobuf.o 00:05:13.809 LIB libspdk_sock.a 00:05:13.809 SO libspdk_sock.so.10.0 00:05:13.809 SYMLINK libspdk_sock.so 00:05:14.067 CC lib/nvme/nvme_ctrlr.o 00:05:14.067 CC lib/nvme/nvme_fabric.o 00:05:14.067 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:14.067 CC lib/nvme/nvme_ns_cmd.o 00:05:14.067 CC lib/nvme/nvme_ns.o 00:05:14.067 CC lib/nvme/nvme_pcie.o 00:05:14.067 CC lib/nvme/nvme.o 00:05:14.067 CC lib/nvme/nvme_pcie_common.o 00:05:14.067 CC lib/nvme/nvme_qpair.o 00:05:14.636 LIB libspdk_thread.a 00:05:14.636 SO libspdk_thread.so.11.0 00:05:14.636 SYMLINK libspdk_thread.so 00:05:14.636 CC lib/nvme/nvme_quirks.o 00:05:14.636 CC lib/nvme/nvme_transport.o 00:05:14.636 CC lib/nvme/nvme_discovery.o 00:05:14.636 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:14.636 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:14.894 CC lib/nvme/nvme_tcp.o 00:05:14.894 CC lib/nvme/nvme_opal.o 00:05:15.152 CC lib/nvme/nvme_io_msg.o 00:05:15.152 CC lib/accel/accel.o 00:05:15.152 CC lib/blob/blobstore.o 00:05:15.152 CC lib/accel/accel_rpc.o 00:05:15.411 CC lib/nvme/nvme_poll_group.o 00:05:15.411 CC lib/init/json_config.o 00:05:15.411 CC lib/accel/accel_sw.o 00:05:15.411 CC lib/virtio/virtio.o 00:05:15.670 CC lib/fsdev/fsdev.o 00:05:15.670 CC lib/fsdev/fsdev_io.o 00:05:15.670 CC lib/init/subsystem.o 00:05:15.670 CC lib/init/subsystem_rpc.o 00:05:15.929 CC lib/virtio/virtio_vhost_user.o 00:05:15.929 CC lib/init/rpc.o 00:05:15.929 CC lib/fsdev/fsdev_rpc.o 00:05:15.929 CC lib/blob/request.o 00:05:15.929 CC lib/nvme/nvme_zns.o 00:05:15.929 LIB libspdk_init.a 00:05:15.929 CC lib/virtio/virtio_vfio_user.o 00:05:15.929 SO libspdk_init.so.6.0 00:05:16.187 SYMLINK libspdk_init.so 00:05:16.187 CC lib/virtio/virtio_pci.o 00:05:16.187 CC lib/nvme/nvme_stubs.o 00:05:16.187 CC lib/nvme/nvme_auth.o 00:05:16.187 LIB libspdk_accel.a 00:05:16.187 SO libspdk_accel.so.16.0 00:05:16.187 LIB libspdk_fsdev.a 00:05:16.187 CC lib/blob/zeroes.o 00:05:16.187 CC lib/nvme/nvme_cuse.o 00:05:16.187 CC lib/blob/blob_bs_dev.o 00:05:16.187 SO libspdk_fsdev.so.2.0 00:05:16.187 SYMLINK libspdk_accel.so 00:05:16.187 LIB libspdk_virtio.a 00:05:16.187 SYMLINK libspdk_fsdev.so 00:05:16.445 SO libspdk_virtio.so.7.0 00:05:16.445 CC lib/nvme/nvme_rdma.o 00:05:16.445 SYMLINK libspdk_virtio.so 00:05:16.445 CC lib/event/app.o 00:05:16.445 CC lib/event/reactor.o 00:05:16.445 CC lib/bdev/bdev.o 00:05:16.445 CC lib/bdev/bdev_rpc.o 00:05:16.445 CC lib/bdev/bdev_zone.o 00:05:16.445 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:16.703 CC lib/bdev/part.o 00:05:16.703 CC lib/bdev/scsi_nvme.o 00:05:16.703 CC lib/event/log_rpc.o 00:05:16.703 CC lib/event/app_rpc.o 00:05:16.961 CC lib/event/scheduler_static.o 00:05:16.961 LIB libspdk_fuse_dispatcher.a 00:05:16.961 LIB libspdk_event.a 00:05:16.961 SO libspdk_fuse_dispatcher.so.1.0 00:05:16.961 SO libspdk_event.so.14.0 00:05:17.220 SYMLINK libspdk_fuse_dispatcher.so 00:05:17.220 SYMLINK libspdk_event.so 00:05:17.787 LIB libspdk_nvme.a 00:05:17.787 LIB libspdk_blob.a 00:05:17.787 SO libspdk_nvme.so.15.0 00:05:18.048 SO libspdk_blob.so.11.0 00:05:18.048 SYMLINK libspdk_blob.so 00:05:18.048 SYMLINK libspdk_nvme.so 00:05:18.307 CC lib/blobfs/blobfs.o 00:05:18.307 CC lib/blobfs/tree.o 00:05:18.307 CC lib/lvol/lvol.o 00:05:19.242 LIB libspdk_lvol.a 00:05:19.242 SO libspdk_lvol.so.10.0 00:05:19.242 LIB libspdk_blobfs.a 00:05:19.242 SYMLINK libspdk_lvol.so 00:05:19.242 SO libspdk_blobfs.so.10.0 00:05:19.242 
LIB libspdk_bdev.a 00:05:19.242 SYMLINK libspdk_blobfs.so 00:05:19.242 SO libspdk_bdev.so.17.0 00:05:19.242 SYMLINK libspdk_bdev.so 00:05:19.500 CC lib/nvmf/ctrlr.o 00:05:19.500 CC lib/nvmf/ctrlr_discovery.o 00:05:19.500 CC lib/nvmf/subsystem.o 00:05:19.500 CC lib/nvmf/ctrlr_bdev.o 00:05:19.500 CC lib/nvmf/nvmf.o 00:05:19.500 CC lib/nvmf/nvmf_rpc.o 00:05:19.500 CC lib/nbd/nbd.o 00:05:19.500 CC lib/scsi/dev.o 00:05:19.500 CC lib/ftl/ftl_core.o 00:05:19.500 CC lib/ublk/ublk.o 00:05:19.759 CC lib/scsi/lun.o 00:05:19.759 CC lib/ftl/ftl_init.o 00:05:20.017 CC lib/nbd/nbd_rpc.o 00:05:20.017 CC lib/ftl/ftl_layout.o 00:05:20.017 CC lib/scsi/port.o 00:05:20.017 CC lib/scsi/scsi.o 00:05:20.017 LIB libspdk_nbd.a 00:05:20.017 CC lib/nvmf/transport.o 00:05:20.017 SO libspdk_nbd.so.7.0 00:05:20.017 CC lib/ftl/ftl_debug.o 00:05:20.017 CC lib/ublk/ublk_rpc.o 00:05:20.017 CC lib/scsi/scsi_bdev.o 00:05:20.275 SYMLINK libspdk_nbd.so 00:05:20.275 CC lib/ftl/ftl_io.o 00:05:20.275 CC lib/scsi/scsi_pr.o 00:05:20.275 CC lib/ftl/ftl_sb.o 00:05:20.275 LIB libspdk_ublk.a 00:05:20.275 SO libspdk_ublk.so.3.0 00:05:20.275 CC lib/scsi/scsi_rpc.o 00:05:20.275 SYMLINK libspdk_ublk.so 00:05:20.275 CC lib/ftl/ftl_l2p.o 00:05:20.275 CC lib/ftl/ftl_l2p_flat.o 00:05:20.275 CC lib/ftl/ftl_nv_cache.o 00:05:20.533 CC lib/scsi/task.o 00:05:20.533 CC lib/nvmf/tcp.o 00:05:20.533 CC lib/nvmf/stubs.o 00:05:20.533 CC lib/nvmf/mdns_server.o 00:05:20.533 CC lib/ftl/ftl_band.o 00:05:20.533 CC lib/ftl/ftl_band_ops.o 00:05:20.533 CC lib/nvmf/rdma.o 00:05:20.533 LIB libspdk_scsi.a 00:05:20.791 SO libspdk_scsi.so.9.0 00:05:20.791 SYMLINK libspdk_scsi.so 00:05:20.791 CC lib/nvmf/auth.o 00:05:20.791 CC lib/ftl/ftl_writer.o 00:05:20.791 CC lib/ftl/ftl_rq.o 00:05:21.048 CC lib/ftl/ftl_reloc.o 00:05:21.048 CC lib/ftl/ftl_l2p_cache.o 00:05:21.048 CC lib/vhost/vhost.o 00:05:21.048 CC lib/iscsi/conn.o 00:05:21.048 CC lib/vhost/vhost_rpc.o 00:05:21.305 CC lib/vhost/vhost_scsi.o 00:05:21.305 CC lib/vhost/vhost_blk.o 00:05:21.305 CC lib/vhost/rte_vhost_user.o 00:05:21.563 CC lib/ftl/ftl_p2l.o 00:05:21.563 CC lib/ftl/ftl_p2l_log.o 00:05:21.563 CC lib/iscsi/init_grp.o 00:05:21.563 CC lib/iscsi/iscsi.o 00:05:21.563 CC lib/iscsi/param.o 00:05:21.821 CC lib/ftl/mngt/ftl_mngt.o 00:05:21.821 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:21.821 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:21.821 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:21.821 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:21.821 CC lib/iscsi/portal_grp.o 00:05:21.821 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:22.080 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:22.080 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:22.080 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:22.080 CC lib/iscsi/tgt_node.o 00:05:22.080 CC lib/iscsi/iscsi_subsystem.o 00:05:22.080 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:22.080 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:22.080 LIB libspdk_vhost.a 00:05:22.080 CC lib/iscsi/iscsi_rpc.o 00:05:22.080 CC lib/iscsi/task.o 00:05:22.339 SO libspdk_vhost.so.8.0 00:05:22.339 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:22.339 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:22.339 SYMLINK libspdk_vhost.so 00:05:22.339 CC lib/ftl/utils/ftl_conf.o 00:05:22.339 CC lib/ftl/utils/ftl_md.o 00:05:22.339 CC lib/ftl/utils/ftl_mempool.o 00:05:22.339 LIB libspdk_nvmf.a 00:05:22.339 CC lib/ftl/utils/ftl_bitmap.o 00:05:22.597 CC lib/ftl/utils/ftl_property.o 00:05:22.597 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:22.597 SO libspdk_nvmf.so.20.0 00:05:22.597 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:22.597 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:22.597 
CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:22.597 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:22.597 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:22.597 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:22.597 SYMLINK libspdk_nvmf.so 00:05:22.597 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:22.597 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:22.597 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:22.854 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:22.854 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:22.854 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:22.854 CC lib/ftl/base/ftl_base_dev.o 00:05:22.854 CC lib/ftl/base/ftl_base_bdev.o 00:05:22.854 CC lib/ftl/ftl_trace.o 00:05:22.854 LIB libspdk_iscsi.a 00:05:22.854 LIB libspdk_ftl.a 00:05:23.111 SO libspdk_iscsi.so.8.0 00:05:23.111 SO libspdk_ftl.so.9.0 00:05:23.111 SYMLINK libspdk_iscsi.so 00:05:23.368 SYMLINK libspdk_ftl.so 00:05:23.625 CC module/env_dpdk/env_dpdk_rpc.o 00:05:23.625 CC module/sock/posix/posix.o 00:05:23.625 CC module/keyring/file/keyring.o 00:05:23.625 CC module/accel/dsa/accel_dsa.o 00:05:23.625 CC module/fsdev/aio/fsdev_aio.o 00:05:23.625 CC module/accel/ioat/accel_ioat.o 00:05:23.625 CC module/accel/iaa/accel_iaa.o 00:05:23.625 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:23.625 CC module/accel/error/accel_error.o 00:05:23.625 CC module/blob/bdev/blob_bdev.o 00:05:23.625 LIB libspdk_env_dpdk_rpc.a 00:05:23.625 SO libspdk_env_dpdk_rpc.so.6.0 00:05:23.883 SYMLINK libspdk_env_dpdk_rpc.so 00:05:23.883 CC module/keyring/file/keyring_rpc.o 00:05:23.883 CC module/accel/error/accel_error_rpc.o 00:05:23.883 CC module/accel/ioat/accel_ioat_rpc.o 00:05:23.883 LIB libspdk_keyring_file.a 00:05:23.883 LIB libspdk_scheduler_dynamic.a 00:05:23.883 CC module/accel/iaa/accel_iaa_rpc.o 00:05:23.883 CC module/accel/dsa/accel_dsa_rpc.o 00:05:23.883 SO libspdk_keyring_file.so.2.0 00:05:23.883 SO libspdk_scheduler_dynamic.so.4.0 00:05:23.883 LIB libspdk_accel_error.a 00:05:23.883 LIB libspdk_accel_ioat.a 00:05:23.883 SYMLINK libspdk_keyring_file.so 00:05:23.883 SO libspdk_accel_error.so.2.0 00:05:23.883 SYMLINK libspdk_scheduler_dynamic.so 00:05:23.883 SO libspdk_accel_ioat.so.6.0 00:05:23.883 LIB libspdk_blob_bdev.a 00:05:23.883 CC module/keyring/linux/keyring.o 00:05:23.883 LIB libspdk_accel_iaa.a 00:05:23.883 LIB libspdk_accel_dsa.a 00:05:23.883 SO libspdk_blob_bdev.so.11.0 00:05:23.883 SYMLINK libspdk_accel_error.so 00:05:23.883 SO libspdk_accel_iaa.so.3.0 00:05:23.883 SO libspdk_accel_dsa.so.5.0 00:05:24.140 SYMLINK libspdk_accel_ioat.so 00:05:24.140 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:24.140 CC module/keyring/linux/keyring_rpc.o 00:05:24.140 SYMLINK libspdk_blob_bdev.so 00:05:24.140 CC module/fsdev/aio/linux_aio_mgr.o 00:05:24.140 SYMLINK libspdk_accel_iaa.so 00:05:24.140 SYMLINK libspdk_accel_dsa.so 00:05:24.140 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:24.140 CC module/scheduler/gscheduler/gscheduler.o 00:05:24.140 LIB libspdk_keyring_linux.a 00:05:24.140 SO libspdk_keyring_linux.so.1.0 00:05:24.140 SYMLINK libspdk_keyring_linux.so 00:05:24.140 LIB libspdk_scheduler_dpdk_governor.a 00:05:24.140 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:24.140 LIB libspdk_scheduler_gscheduler.a 00:05:24.398 CC module/bdev/delay/vbdev_delay.o 00:05:24.398 CC module/bdev/error/vbdev_error.o 00:05:24.398 CC module/bdev/gpt/gpt.o 00:05:24.398 SO libspdk_scheduler_gscheduler.so.4.0 00:05:24.398 CC module/blobfs/bdev/blobfs_bdev.o 00:05:24.398 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:24.398 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:24.398 CC 
module/bdev/lvol/vbdev_lvol.o 00:05:24.398 SYMLINK libspdk_scheduler_gscheduler.so 00:05:24.398 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:24.398 LIB libspdk_fsdev_aio.a 00:05:24.398 CC module/bdev/malloc/bdev_malloc.o 00:05:24.398 LIB libspdk_sock_posix.a 00:05:24.398 SO libspdk_fsdev_aio.so.1.0 00:05:24.398 SO libspdk_sock_posix.so.6.0 00:05:24.398 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:24.398 CC module/bdev/gpt/vbdev_gpt.o 00:05:24.398 SYMLINK libspdk_fsdev_aio.so 00:05:24.398 LIB libspdk_blobfs_bdev.a 00:05:24.398 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:24.398 SYMLINK libspdk_sock_posix.so 00:05:24.398 CC module/bdev/error/vbdev_error_rpc.o 00:05:24.398 SO libspdk_blobfs_bdev.so.6.0 00:05:24.656 SYMLINK libspdk_blobfs_bdev.so 00:05:24.656 LIB libspdk_bdev_error.a 00:05:24.656 CC module/bdev/null/bdev_null.o 00:05:24.656 LIB libspdk_bdev_delay.a 00:05:24.656 SO libspdk_bdev_error.so.6.0 00:05:24.656 SO libspdk_bdev_delay.so.6.0 00:05:24.656 CC module/bdev/null/bdev_null_rpc.o 00:05:24.656 CC module/bdev/passthru/vbdev_passthru.o 00:05:24.656 CC module/bdev/nvme/bdev_nvme.o 00:05:24.656 SYMLINK libspdk_bdev_error.so 00:05:24.656 LIB libspdk_bdev_gpt.a 00:05:24.656 SYMLINK libspdk_bdev_delay.so 00:05:24.656 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:24.656 CC module/bdev/raid/bdev_raid.o 00:05:24.656 SO libspdk_bdev_gpt.so.6.0 00:05:24.656 LIB libspdk_bdev_lvol.a 00:05:24.656 LIB libspdk_bdev_malloc.a 00:05:24.656 SO libspdk_bdev_lvol.so.6.0 00:05:24.914 SYMLINK libspdk_bdev_gpt.so 00:05:24.914 SO libspdk_bdev_malloc.so.6.0 00:05:24.914 CC module/bdev/raid/bdev_raid_rpc.o 00:05:24.914 CC module/bdev/split/vbdev_split.o 00:05:24.914 CC module/bdev/raid/bdev_raid_sb.o 00:05:24.914 SYMLINK libspdk_bdev_lvol.so 00:05:24.914 CC module/bdev/raid/raid0.o 00:05:24.914 LIB libspdk_bdev_null.a 00:05:24.914 SYMLINK libspdk_bdev_malloc.so 00:05:24.914 CC module/bdev/raid/raid1.o 00:05:24.914 SO libspdk_bdev_null.so.6.0 00:05:24.914 SYMLINK libspdk_bdev_null.so 00:05:24.914 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:24.914 CC module/bdev/nvme/nvme_rpc.o 00:05:24.914 CC module/bdev/raid/concat.o 00:05:24.914 LIB libspdk_bdev_passthru.a 00:05:24.914 CC module/bdev/split/vbdev_split_rpc.o 00:05:25.171 SO libspdk_bdev_passthru.so.6.0 00:05:25.171 CC module/bdev/nvme/bdev_mdns_client.o 00:05:25.171 SYMLINK libspdk_bdev_passthru.so 00:05:25.171 LIB libspdk_bdev_split.a 00:05:25.171 SO libspdk_bdev_split.so.6.0 00:05:25.171 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:25.171 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:25.171 CC module/bdev/xnvme/bdev_xnvme.o 00:05:25.171 CC module/bdev/aio/bdev_aio.o 00:05:25.171 SYMLINK libspdk_bdev_split.so 00:05:25.171 CC module/bdev/nvme/vbdev_opal.o 00:05:25.171 CC module/bdev/aio/bdev_aio_rpc.o 00:05:25.171 CC module/bdev/ftl/bdev_ftl.o 00:05:25.171 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:25.429 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:25.429 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:25.429 LIB libspdk_bdev_zone_block.a 00:05:25.429 LIB libspdk_bdev_ftl.a 00:05:25.429 CC module/bdev/iscsi/bdev_iscsi.o 00:05:25.429 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:25.429 SO libspdk_bdev_zone_block.so.6.0 00:05:25.429 LIB libspdk_bdev_aio.a 00:05:25.429 SO libspdk_bdev_ftl.so.6.0 00:05:25.429 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:25.429 SO libspdk_bdev_aio.so.6.0 00:05:25.429 SYMLINK libspdk_bdev_zone_block.so 00:05:25.429 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:25.429 SYMLINK libspdk_bdev_ftl.so 00:05:25.429 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:25.429 LIB libspdk_bdev_raid.a 00:05:25.429 SYMLINK libspdk_bdev_aio.so 00:05:25.687 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:25.687 LIB libspdk_bdev_xnvme.a 00:05:25.687 SO libspdk_bdev_xnvme.so.3.0 00:05:25.687 SO libspdk_bdev_raid.so.6.0 00:05:25.687 SYMLINK libspdk_bdev_xnvme.so 00:05:25.687 SYMLINK libspdk_bdev_raid.so 00:05:25.687 LIB libspdk_bdev_iscsi.a 00:05:25.687 SO libspdk_bdev_iscsi.so.6.0 00:05:25.945 SYMLINK libspdk_bdev_iscsi.so 00:05:25.945 LIB libspdk_bdev_virtio.a 00:05:25.945 SO libspdk_bdev_virtio.so.6.0 00:05:25.945 SYMLINK libspdk_bdev_virtio.so 00:05:27.342 LIB libspdk_bdev_nvme.a 00:05:27.342 SO libspdk_bdev_nvme.so.7.1 00:05:27.342 SYMLINK libspdk_bdev_nvme.so 00:05:27.601 CC module/event/subsystems/iobuf/iobuf.o 00:05:27.601 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:27.601 CC module/event/subsystems/scheduler/scheduler.o 00:05:27.601 CC module/event/subsystems/vmd/vmd.o 00:05:27.601 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:27.601 CC module/event/subsystems/keyring/keyring.o 00:05:27.601 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:27.601 CC module/event/subsystems/fsdev/fsdev.o 00:05:27.601 CC module/event/subsystems/sock/sock.o 00:05:27.601 LIB libspdk_event_vhost_blk.a 00:05:27.601 LIB libspdk_event_vmd.a 00:05:27.601 LIB libspdk_event_fsdev.a 00:05:27.601 LIB libspdk_event_keyring.a 00:05:27.601 LIB libspdk_event_scheduler.a 00:05:27.601 SO libspdk_event_vhost_blk.so.3.0 00:05:27.601 SO libspdk_event_vmd.so.6.0 00:05:27.601 SO libspdk_event_fsdev.so.1.0 00:05:27.601 LIB libspdk_event_iobuf.a 00:05:27.601 LIB libspdk_event_sock.a 00:05:27.601 SO libspdk_event_keyring.so.1.0 00:05:27.601 SO libspdk_event_scheduler.so.4.0 00:05:27.601 SO libspdk_event_iobuf.so.3.0 00:05:27.601 SO libspdk_event_sock.so.5.0 00:05:27.601 SYMLINK libspdk_event_vhost_blk.so 00:05:27.601 SYMLINK libspdk_event_vmd.so 00:05:27.859 SYMLINK libspdk_event_fsdev.so 00:05:27.859 SYMLINK libspdk_event_scheduler.so 00:05:27.859 SYMLINK libspdk_event_keyring.so 00:05:27.859 SYMLINK libspdk_event_sock.so 00:05:27.859 SYMLINK libspdk_event_iobuf.so 00:05:28.118 CC module/event/subsystems/accel/accel.o 00:05:28.118 LIB libspdk_event_accel.a 00:05:28.118 SO libspdk_event_accel.so.6.0 00:05:28.118 SYMLINK libspdk_event_accel.so 00:05:28.377 CC module/event/subsystems/bdev/bdev.o 00:05:28.635 LIB libspdk_event_bdev.a 00:05:28.635 SO libspdk_event_bdev.so.6.0 00:05:28.635 SYMLINK libspdk_event_bdev.so 00:05:28.894 CC module/event/subsystems/ublk/ublk.o 00:05:28.894 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:28.894 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:28.894 CC module/event/subsystems/scsi/scsi.o 00:05:28.894 CC module/event/subsystems/nbd/nbd.o 00:05:28.894 LIB libspdk_event_ublk.a 00:05:28.894 LIB libspdk_event_nbd.a 00:05:28.894 SO libspdk_event_ublk.so.3.0 00:05:28.894 LIB libspdk_event_scsi.a 00:05:28.894 SO libspdk_event_nbd.so.6.0 00:05:28.894 SO libspdk_event_scsi.so.6.0 00:05:28.894 SYMLINK libspdk_event_ublk.so 00:05:28.894 SYMLINK libspdk_event_nbd.so 00:05:28.894 LIB libspdk_event_nvmf.a 00:05:29.152 SYMLINK libspdk_event_scsi.so 00:05:29.152 SO libspdk_event_nvmf.so.6.0 00:05:29.152 SYMLINK libspdk_event_nvmf.so 00:05:29.152 CC module/event/subsystems/iscsi/iscsi.o 00:05:29.152 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:29.411 LIB libspdk_event_vhost_scsi.a 00:05:29.411 LIB libspdk_event_iscsi.a 00:05:29.411 SO libspdk_event_vhost_scsi.so.3.0 00:05:29.411 SO libspdk_event_iscsi.so.6.0 
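[Editor's note] The CC/SO/SYMLINK lines above are the per-subsystem event plugins (iobuf, vmd, keyring, sock, bdev, scsi, nvmf, iscsi, and so on), each compiled as a single object and linked into its own libspdk_event_*.so. One quick way to sanity-check a run like this from a saved copy of the console output — build.log is a hypothetical file name here, not something the harness itself produces — is to tally compile lines per subsystem:

    # Count compiled objects per event subsystem from a saved build log.
    grep -o 'CC module/event/subsystems/[a-z_]*/' build.log \
        | sort | uniq -c | sort -rn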
00:05:29.411 SYMLINK libspdk_event_vhost_scsi.so 00:05:29.411 SYMLINK libspdk_event_iscsi.so 00:05:29.669 SO libspdk.so.6.0 00:05:29.669 SYMLINK libspdk.so 00:05:29.669 CXX app/trace/trace.o 00:05:29.669 CC app/trace_record/trace_record.o 00:05:29.669 CC test/rpc_client/rpc_client_test.o 00:05:29.669 TEST_HEADER include/spdk/accel.h 00:05:29.669 TEST_HEADER include/spdk/accel_module.h 00:05:29.669 TEST_HEADER include/spdk/assert.h 00:05:29.669 TEST_HEADER include/spdk/barrier.h 00:05:29.669 TEST_HEADER include/spdk/base64.h 00:05:29.669 TEST_HEADER include/spdk/bdev.h 00:05:29.669 TEST_HEADER include/spdk/bdev_module.h 00:05:29.669 TEST_HEADER include/spdk/bdev_zone.h 00:05:29.669 TEST_HEADER include/spdk/bit_array.h 00:05:29.669 TEST_HEADER include/spdk/bit_pool.h 00:05:29.669 TEST_HEADER include/spdk/blob_bdev.h 00:05:29.670 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:29.670 TEST_HEADER include/spdk/blobfs.h 00:05:29.670 TEST_HEADER include/spdk/blob.h 00:05:29.670 TEST_HEADER include/spdk/conf.h 00:05:29.670 CC app/nvmf_tgt/nvmf_main.o 00:05:29.670 TEST_HEADER include/spdk/config.h 00:05:29.670 TEST_HEADER include/spdk/cpuset.h 00:05:29.670 TEST_HEADER include/spdk/crc16.h 00:05:29.670 TEST_HEADER include/spdk/crc32.h 00:05:29.670 TEST_HEADER include/spdk/crc64.h 00:05:29.670 TEST_HEADER include/spdk/dif.h 00:05:29.670 TEST_HEADER include/spdk/dma.h 00:05:29.670 TEST_HEADER include/spdk/endian.h 00:05:29.670 TEST_HEADER include/spdk/env_dpdk.h 00:05:29.670 TEST_HEADER include/spdk/env.h 00:05:29.670 TEST_HEADER include/spdk/event.h 00:05:29.670 TEST_HEADER include/spdk/fd_group.h 00:05:29.670 TEST_HEADER include/spdk/fd.h 00:05:29.670 TEST_HEADER include/spdk/file.h 00:05:29.670 TEST_HEADER include/spdk/fsdev.h 00:05:29.670 TEST_HEADER include/spdk/fsdev_module.h 00:05:29.670 TEST_HEADER include/spdk/ftl.h 00:05:29.670 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:29.670 TEST_HEADER include/spdk/gpt_spec.h 00:05:29.670 TEST_HEADER include/spdk/hexlify.h 00:05:29.670 TEST_HEADER include/spdk/histogram_data.h 00:05:29.670 CC examples/util/zipf/zipf.o 00:05:29.670 TEST_HEADER include/spdk/idxd.h 00:05:29.670 CC test/thread/poller_perf/poller_perf.o 00:05:29.670 TEST_HEADER include/spdk/idxd_spec.h 00:05:29.928 TEST_HEADER include/spdk/init.h 00:05:29.928 TEST_HEADER include/spdk/ioat.h 00:05:29.928 TEST_HEADER include/spdk/ioat_spec.h 00:05:29.928 TEST_HEADER include/spdk/iscsi_spec.h 00:05:29.928 TEST_HEADER include/spdk/json.h 00:05:29.928 TEST_HEADER include/spdk/jsonrpc.h 00:05:29.928 TEST_HEADER include/spdk/keyring.h 00:05:29.928 TEST_HEADER include/spdk/keyring_module.h 00:05:29.928 TEST_HEADER include/spdk/likely.h 00:05:29.928 TEST_HEADER include/spdk/log.h 00:05:29.928 CC test/dma/test_dma/test_dma.o 00:05:29.928 TEST_HEADER include/spdk/lvol.h 00:05:29.928 CC test/app/bdev_svc/bdev_svc.o 00:05:29.928 TEST_HEADER include/spdk/md5.h 00:05:29.928 TEST_HEADER include/spdk/memory.h 00:05:29.928 TEST_HEADER include/spdk/mmio.h 00:05:29.928 TEST_HEADER include/spdk/nbd.h 00:05:29.928 TEST_HEADER include/spdk/net.h 00:05:29.928 TEST_HEADER include/spdk/notify.h 00:05:29.928 TEST_HEADER include/spdk/nvme.h 00:05:29.928 TEST_HEADER include/spdk/nvme_intel.h 00:05:29.928 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:29.928 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:29.928 TEST_HEADER include/spdk/nvme_spec.h 00:05:29.928 TEST_HEADER include/spdk/nvme_zns.h 00:05:29.928 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:29.928 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:29.928 TEST_HEADER 
include/spdk/nvmf.h 00:05:29.928 TEST_HEADER include/spdk/nvmf_spec.h 00:05:29.928 TEST_HEADER include/spdk/nvmf_transport.h 00:05:29.928 TEST_HEADER include/spdk/opal.h 00:05:29.928 TEST_HEADER include/spdk/opal_spec.h 00:05:29.928 TEST_HEADER include/spdk/pci_ids.h 00:05:29.928 TEST_HEADER include/spdk/pipe.h 00:05:29.928 TEST_HEADER include/spdk/queue.h 00:05:29.928 TEST_HEADER include/spdk/reduce.h 00:05:29.928 CC test/env/mem_callbacks/mem_callbacks.o 00:05:29.928 TEST_HEADER include/spdk/rpc.h 00:05:29.928 TEST_HEADER include/spdk/scheduler.h 00:05:29.928 TEST_HEADER include/spdk/scsi.h 00:05:29.928 TEST_HEADER include/spdk/scsi_spec.h 00:05:29.928 TEST_HEADER include/spdk/sock.h 00:05:29.928 TEST_HEADER include/spdk/stdinc.h 00:05:29.928 TEST_HEADER include/spdk/string.h 00:05:29.928 TEST_HEADER include/spdk/thread.h 00:05:29.928 TEST_HEADER include/spdk/trace.h 00:05:29.928 TEST_HEADER include/spdk/trace_parser.h 00:05:29.928 TEST_HEADER include/spdk/tree.h 00:05:29.928 TEST_HEADER include/spdk/ublk.h 00:05:29.928 TEST_HEADER include/spdk/util.h 00:05:29.928 LINK nvmf_tgt 00:05:29.928 TEST_HEADER include/spdk/uuid.h 00:05:29.928 TEST_HEADER include/spdk/version.h 00:05:29.928 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:29.928 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:29.928 TEST_HEADER include/spdk/vhost.h 00:05:29.928 TEST_HEADER include/spdk/vmd.h 00:05:29.928 TEST_HEADER include/spdk/xor.h 00:05:29.928 TEST_HEADER include/spdk/zipf.h 00:05:29.928 CXX test/cpp_headers/accel.o 00:05:29.928 LINK rpc_client_test 00:05:29.928 LINK poller_perf 00:05:29.928 LINK zipf 00:05:29.928 LINK spdk_trace_record 00:05:29.928 LINK bdev_svc 00:05:29.928 CXX test/cpp_headers/accel_module.o 00:05:29.928 CXX test/cpp_headers/assert.o 00:05:29.928 CXX test/cpp_headers/barrier.o 00:05:29.928 CXX test/cpp_headers/base64.o 00:05:30.187 LINK spdk_trace 00:05:30.187 CXX test/cpp_headers/bdev.o 00:05:30.187 CC examples/ioat/perf/perf.o 00:05:30.187 CC examples/vmd/lsvmd/lsvmd.o 00:05:30.187 CC test/app/histogram_perf/histogram_perf.o 00:05:30.187 CC examples/vmd/led/led.o 00:05:30.187 CC examples/ioat/verify/verify.o 00:05:30.187 LINK test_dma 00:05:30.187 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:30.187 CXX test/cpp_headers/bdev_module.o 00:05:30.445 CC app/iscsi_tgt/iscsi_tgt.o 00:05:30.445 LINK led 00:05:30.445 LINK mem_callbacks 00:05:30.445 LINK lsvmd 00:05:30.445 LINK histogram_perf 00:05:30.445 LINK ioat_perf 00:05:30.445 LINK verify 00:05:30.445 LINK iscsi_tgt 00:05:30.445 CXX test/cpp_headers/bdev_zone.o 00:05:30.445 CC test/app/jsoncat/jsoncat.o 00:05:30.445 CC test/env/vtophys/vtophys.o 00:05:30.445 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:30.445 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:30.445 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:30.704 LINK nvme_fuzz 00:05:30.704 CXX test/cpp_headers/bit_array.o 00:05:30.704 CC app/spdk_tgt/spdk_tgt.o 00:05:30.704 LINK jsoncat 00:05:30.704 LINK vtophys 00:05:30.704 CC examples/idxd/perf/perf.o 00:05:30.704 CC app/spdk_lspci/spdk_lspci.o 00:05:30.704 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:30.704 CXX test/cpp_headers/bit_pool.o 00:05:30.963 CC test/app/stub/stub.o 00:05:30.963 LINK spdk_lspci 00:05:30.963 LINK spdk_tgt 00:05:30.963 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:30.963 CC examples/thread/thread/thread_ex.o 00:05:30.963 CXX test/cpp_headers/blob_bdev.o 00:05:30.963 LINK vhost_fuzz 00:05:30.963 CXX test/cpp_headers/blobfs_bdev.o 00:05:30.963 LINK interrupt_tgt 00:05:30.963 LINK 
env_dpdk_post_init 00:05:30.963 LINK stub 00:05:30.963 LINK idxd_perf 00:05:30.963 LINK thread 00:05:31.222 CC app/spdk_nvme_perf/perf.o 00:05:31.222 CXX test/cpp_headers/blobfs.o 00:05:31.222 CC app/spdk_nvme_identify/identify.o 00:05:31.222 CC test/env/memory/memory_ut.o 00:05:31.222 CC app/spdk_nvme_discover/discovery_aer.o 00:05:31.222 CC test/event/event_perf/event_perf.o 00:05:31.222 CC test/event/reactor/reactor.o 00:05:31.222 CC examples/sock/hello_world/hello_sock.o 00:05:31.222 CXX test/cpp_headers/blob.o 00:05:31.222 CC test/event/reactor_perf/reactor_perf.o 00:05:31.481 LINK event_perf 00:05:31.481 LINK reactor 00:05:31.481 LINK spdk_nvme_discover 00:05:31.481 CXX test/cpp_headers/conf.o 00:05:31.481 LINK reactor_perf 00:05:31.481 CXX test/cpp_headers/config.o 00:05:31.481 CC app/spdk_top/spdk_top.o 00:05:31.481 LINK hello_sock 00:05:31.481 CC test/event/app_repeat/app_repeat.o 00:05:31.481 CXX test/cpp_headers/cpuset.o 00:05:31.739 CC app/vhost/vhost.o 00:05:31.739 CC app/spdk_dd/spdk_dd.o 00:05:31.739 LINK app_repeat 00:05:31.739 CXX test/cpp_headers/crc16.o 00:05:31.739 LINK vhost 00:05:31.739 LINK spdk_nvme_perf 00:05:31.998 CC examples/accel/perf/accel_perf.o 00:05:31.998 CXX test/cpp_headers/crc32.o 00:05:31.998 LINK spdk_dd 00:05:31.998 CC test/event/scheduler/scheduler.o 00:05:31.998 CXX test/cpp_headers/crc64.o 00:05:31.998 LINK spdk_nvme_identify 00:05:31.998 CC examples/nvme/hello_world/hello_world.o 00:05:31.998 LINK memory_ut 00:05:31.998 CC examples/blob/hello_world/hello_blob.o 00:05:32.257 CXX test/cpp_headers/dif.o 00:05:32.257 CXX test/cpp_headers/dma.o 00:05:32.257 CC examples/blob/cli/blobcli.o 00:05:32.257 LINK scheduler 00:05:32.257 LINK iscsi_fuzz 00:05:32.257 CXX test/cpp_headers/endian.o 00:05:32.257 LINK hello_world 00:05:32.257 LINK hello_blob 00:05:32.257 CC test/env/pci/pci_ut.o 00:05:32.257 CC examples/nvme/reconnect/reconnect.o 00:05:32.257 LINK accel_perf 00:05:32.516 CXX test/cpp_headers/env_dpdk.o 00:05:32.516 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:32.516 LINK spdk_top 00:05:32.516 CXX test/cpp_headers/env.o 00:05:32.516 CC examples/nvme/arbitration/arbitration.o 00:05:32.516 LINK blobcli 00:05:32.516 CC test/nvme/aer/aer.o 00:05:32.516 CXX test/cpp_headers/event.o 00:05:32.774 CC examples/nvme/hotplug/hotplug.o 00:05:32.774 LINK pci_ut 00:05:32.774 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:32.774 LINK reconnect 00:05:32.774 CC app/fio/nvme/fio_plugin.o 00:05:32.774 CXX test/cpp_headers/fd_group.o 00:05:32.774 CXX test/cpp_headers/fd.o 00:05:32.774 CC app/fio/bdev/fio_plugin.o 00:05:32.774 CXX test/cpp_headers/file.o 00:05:32.774 LINK aer 00:05:32.774 LINK arbitration 00:05:32.774 LINK hotplug 00:05:33.032 LINK nvme_manage 00:05:33.032 LINK hello_fsdev 00:05:33.032 CXX test/cpp_headers/fsdev.o 00:05:33.032 CC test/nvme/sgl/sgl.o 00:05:33.032 CC test/nvme/reset/reset.o 00:05:33.032 CXX test/cpp_headers/fsdev_module.o 00:05:33.032 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:33.032 CXX test/cpp_headers/ftl.o 00:05:33.032 CC test/accel/dif/dif.o 00:05:33.291 CC test/blobfs/mkfs/mkfs.o 00:05:33.291 LINK spdk_nvme 00:05:33.291 LINK reset 00:05:33.291 LINK sgl 00:05:33.291 CC examples/bdev/hello_world/hello_bdev.o 00:05:33.291 CC test/lvol/esnap/esnap.o 00:05:33.291 CXX test/cpp_headers/fuse_dispatcher.o 00:05:33.291 LINK cmb_copy 00:05:33.291 LINK spdk_bdev 00:05:33.291 LINK mkfs 00:05:33.291 CC test/nvme/e2edp/nvme_dp.o 00:05:33.291 CC test/nvme/overhead/overhead.o 00:05:33.291 CC examples/bdev/bdevperf/bdevperf.o 00:05:33.548 CXX 
test/cpp_headers/gpt_spec.o 00:05:33.548 LINK hello_bdev 00:05:33.548 CC examples/nvme/abort/abort.o 00:05:33.548 CXX test/cpp_headers/hexlify.o 00:05:33.548 CC test/nvme/err_injection/err_injection.o 00:05:33.548 CXX test/cpp_headers/histogram_data.o 00:05:33.548 LINK nvme_dp 00:05:33.548 CXX test/cpp_headers/idxd.o 00:05:33.548 LINK overhead 00:05:33.548 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:33.806 LINK err_injection 00:05:33.806 CXX test/cpp_headers/idxd_spec.o 00:05:33.806 CXX test/cpp_headers/init.o 00:05:33.806 LINK pmr_persistence 00:05:33.806 CC test/nvme/startup/startup.o 00:05:33.806 CXX test/cpp_headers/ioat.o 00:05:33.806 CC test/nvme/reserve/reserve.o 00:05:33.806 LINK dif 00:05:33.806 LINK abort 00:05:33.806 CC test/nvme/simple_copy/simple_copy.o 00:05:34.064 CXX test/cpp_headers/ioat_spec.o 00:05:34.064 LINK startup 00:05:34.064 CC test/nvme/boot_partition/boot_partition.o 00:05:34.064 CC test/nvme/connect_stress/connect_stress.o 00:05:34.064 LINK reserve 00:05:34.064 CC test/nvme/compliance/nvme_compliance.o 00:05:34.064 CC test/nvme/fused_ordering/fused_ordering.o 00:05:34.064 CXX test/cpp_headers/iscsi_spec.o 00:05:34.064 CXX test/cpp_headers/json.o 00:05:34.064 LINK simple_copy 00:05:34.064 CXX test/cpp_headers/jsonrpc.o 00:05:34.064 LINK boot_partition 00:05:34.064 LINK bdevperf 00:05:34.064 LINK connect_stress 00:05:34.323 LINK fused_ordering 00:05:34.323 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:34.323 CXX test/cpp_headers/keyring.o 00:05:34.323 CC test/nvme/fdp/fdp.o 00:05:34.323 CC test/nvme/cuse/cuse.o 00:05:34.323 CXX test/cpp_headers/keyring_module.o 00:05:34.323 CXX test/cpp_headers/likely.o 00:05:34.323 CXX test/cpp_headers/log.o 00:05:34.323 CXX test/cpp_headers/lvol.o 00:05:34.323 LINK nvme_compliance 00:05:34.323 LINK doorbell_aers 00:05:34.581 CC test/bdev/bdevio/bdevio.o 00:05:34.581 CXX test/cpp_headers/md5.o 00:05:34.581 CC examples/nvmf/nvmf/nvmf.o 00:05:34.581 CXX test/cpp_headers/memory.o 00:05:34.581 CXX test/cpp_headers/mmio.o 00:05:34.581 CXX test/cpp_headers/nbd.o 00:05:34.581 CXX test/cpp_headers/net.o 00:05:34.581 CXX test/cpp_headers/notify.o 00:05:34.581 LINK fdp 00:05:34.581 CXX test/cpp_headers/nvme.o 00:05:34.581 CXX test/cpp_headers/nvme_intel.o 00:05:34.581 CXX test/cpp_headers/nvme_ocssd.o 00:05:34.839 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:34.839 CXX test/cpp_headers/nvme_spec.o 00:05:34.839 CXX test/cpp_headers/nvme_zns.o 00:05:34.839 LINK nvmf 00:05:34.839 CXX test/cpp_headers/nvmf_cmd.o 00:05:34.839 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:34.839 CXX test/cpp_headers/nvmf.o 00:05:34.839 LINK bdevio 00:05:34.839 CXX test/cpp_headers/nvmf_spec.o 00:05:34.839 CXX test/cpp_headers/nvmf_transport.o 00:05:34.839 CXX test/cpp_headers/opal.o 00:05:34.839 CXX test/cpp_headers/opal_spec.o 00:05:35.098 CXX test/cpp_headers/pci_ids.o 00:05:35.098 CXX test/cpp_headers/pipe.o 00:05:35.098 CXX test/cpp_headers/queue.o 00:05:35.098 CXX test/cpp_headers/reduce.o 00:05:35.098 CXX test/cpp_headers/rpc.o 00:05:35.098 CXX test/cpp_headers/scheduler.o 00:05:35.098 CXX test/cpp_headers/scsi.o 00:05:35.098 CXX test/cpp_headers/scsi_spec.o 00:05:35.098 CXX test/cpp_headers/sock.o 00:05:35.098 CXX test/cpp_headers/stdinc.o 00:05:35.098 CXX test/cpp_headers/string.o 00:05:35.098 CXX test/cpp_headers/thread.o 00:05:35.098 CXX test/cpp_headers/trace.o 00:05:35.098 CXX test/cpp_headers/trace_parser.o 00:05:35.098 CXX test/cpp_headers/tree.o 00:05:35.098 CXX test/cpp_headers/ublk.o 00:05:35.356 CXX test/cpp_headers/util.o 
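[Editor's note] The long run of CXX test/cpp_headers/*.o lines interleaved above compiles each public spdk/*.h header as its own C++ translation unit, so a header only passes if it is self-contained and C++-clean. A minimal sketch of that kind of check, assuming a checkout with headers under include/spdk and run from the repository root (file names here are illustrative, not the test's own):

    # Compile each public header in isolation as C++.
    for hdr in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$hdr")" > /tmp/hdr_check.cpp
        g++ -std=c++17 -Iinclude -c /tmp/hdr_check.cpp -o /dev/null \
            || echo "FAIL: $hdr"
    done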
00:05:35.356 CXX test/cpp_headers/uuid.o 00:05:35.356 CXX test/cpp_headers/version.o 00:05:35.356 CXX test/cpp_headers/vfio_user_pci.o 00:05:35.356 CXX test/cpp_headers/vfio_user_spec.o 00:05:35.356 CXX test/cpp_headers/vhost.o 00:05:35.356 LINK cuse 00:05:35.356 CXX test/cpp_headers/vmd.o 00:05:35.356 CXX test/cpp_headers/xor.o 00:05:35.357 CXX test/cpp_headers/zipf.o 00:05:38.675 LINK esnap 00:05:38.675 00:05:38.675 real 1m4.941s 00:05:38.675 user 5m12.810s 00:05:38.675 sys 0m55.358s 00:05:38.675 04:51:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:38.675 ************************************ 00:05:38.675 END TEST make 00:05:38.675 ************************************ 00:05:38.675 04:51:55 make -- common/autotest_common.sh@10 -- $ set +x 00:05:38.675 04:51:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:38.675 04:51:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:38.675 04:51:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:38.675 04:51:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.675 04:51:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:38.675 04:51:55 -- pm/common@44 -- $ pid=5811 00:05:38.675 04:51:55 -- pm/common@50 -- $ kill -TERM 5811 00:05:38.675 04:51:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.675 04:51:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:38.675 04:51:55 -- pm/common@44 -- $ pid=5812 00:05:38.675 04:51:55 -- pm/common@50 -- $ kill -TERM 5812 00:05:38.675 04:51:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:38.675 04:51:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:38.675 04:51:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.675 04:51:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.675 04:51:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.675 04:51:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.675 04:51:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.675 04:51:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.675 04:51:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.675 04:51:55 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.675 04:51:55 -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.675 04:51:55 -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.675 04:51:55 -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.675 04:51:55 -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.675 04:51:55 -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.675 04:51:55 -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.675 04:51:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.675 04:51:55 -- scripts/common.sh@344 -- # case "$op" in 00:05:38.675 04:51:55 -- scripts/common.sh@345 -- # : 1 00:05:38.675 04:51:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.675 04:51:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.675 04:51:55 -- scripts/common.sh@365 -- # decimal 1 00:05:38.675 04:51:55 -- scripts/common.sh@353 -- # local d=1 00:05:38.675 04:51:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.675 04:51:55 -- scripts/common.sh@355 -- # echo 1 00:05:38.675 04:51:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.675 04:51:55 -- scripts/common.sh@366 -- # decimal 2 00:05:38.675 04:51:55 -- scripts/common.sh@353 -- # local d=2 00:05:38.675 04:51:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.675 04:51:55 -- scripts/common.sh@355 -- # echo 2 00:05:38.675 04:51:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.675 04:51:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.675 04:51:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.675 04:51:55 -- scripts/common.sh@368 -- # return 0 00:05:38.675 04:51:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.675 04:51:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.675 --rc genhtml_branch_coverage=1 00:05:38.675 --rc genhtml_function_coverage=1 00:05:38.675 --rc genhtml_legend=1 00:05:38.675 --rc geninfo_all_blocks=1 00:05:38.675 --rc geninfo_unexecuted_blocks=1 00:05:38.675 00:05:38.675 ' 00:05:38.675 04:51:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.675 --rc genhtml_branch_coverage=1 00:05:38.675 --rc genhtml_function_coverage=1 00:05:38.675 --rc genhtml_legend=1 00:05:38.675 --rc geninfo_all_blocks=1 00:05:38.675 --rc geninfo_unexecuted_blocks=1 00:05:38.675 00:05:38.675 ' 00:05:38.675 04:51:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.675 --rc genhtml_branch_coverage=1 00:05:38.675 --rc genhtml_function_coverage=1 00:05:38.675 --rc genhtml_legend=1 00:05:38.675 --rc geninfo_all_blocks=1 00:05:38.675 --rc geninfo_unexecuted_blocks=1 00:05:38.675 00:05:38.675 ' 00:05:38.675 04:51:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.675 --rc genhtml_branch_coverage=1 00:05:38.675 --rc genhtml_function_coverage=1 00:05:38.675 --rc genhtml_legend=1 00:05:38.675 --rc geninfo_all_blocks=1 00:05:38.675 --rc geninfo_unexecuted_blocks=1 00:05:38.675 00:05:38.675 ' 00:05:38.675 04:51:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.675 04:51:55 -- nvmf/common.sh@7 -- # uname -s 00:05:38.675 04:51:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.675 04:51:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.675 04:51:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.675 04:51:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.675 04:51:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.675 04:51:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.675 04:51:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.675 04:51:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.675 04:51:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.675 04:51:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.675 04:51:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dbf62c29-9d5f-4666-9cee-22902cff7e75 00:05:38.675 
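[Editor's note] The xtrace block above is the stock version gate: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, so `lt 1.15 2` decides whether the installed lcov predates 2.x and still needs the legacy --rc options. A hedged re-statement of that logic for numeric fields (the name version_lt is illustrative, not the script's own):

    # Return 0 when $1 < $2, comparing dot/dash/colon-separated numeric fields.
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc flags"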
04:51:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dbf62c29-9d5f-4666-9cee-22902cff7e75 00:05:38.675 04:51:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.675 04:51:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.676 04:51:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.676 04:51:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.676 04:51:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.676 04:51:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.676 04:51:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.676 04:51:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.676 04:51:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.676 04:51:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.676 04:51:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.676 04:51:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.676 04:51:55 -- paths/export.sh@5 -- # export PATH 00:05:38.676 04:51:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.676 04:51:55 -- nvmf/common.sh@51 -- # : 0 00:05:38.676 04:51:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.676 04:51:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.676 04:51:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.676 04:51:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.676 04:51:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.676 04:51:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.676 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.676 04:51:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.676 04:51:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.676 04:51:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.676 04:51:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:38.676 04:51:55 -- spdk/autotest.sh@32 -- # uname -s 00:05:38.938 04:51:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:38.938 04:51:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:38.938 04:51:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:38.938 04:51:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:38.938 04:51:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:38.938 04:51:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:38.938 04:51:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:38.938 04:51:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:38.938 04:51:55 -- spdk/autotest.sh@48 -- # udevadm_pid=66633 00:05:38.938 04:51:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:38.938 04:51:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:38.938 04:51:55 -- pm/common@17 -- # local monitor 00:05:38.938 04:51:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.938 04:51:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:38.938 04:51:55 -- pm/common@25 -- # sleep 1 00:05:38.938 04:51:55 -- pm/common@21 -- # date +%s 00:05:38.938 04:51:55 -- pm/common@21 -- # date +%s 00:05:38.938 04:51:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732164715 00:05:38.938 04:51:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732164715 00:05:38.938 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732164715_collect-cpu-load.pm.log 00:05:38.938 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732164715_collect-vmstat.pm.log 00:05:39.880 04:51:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:39.880 04:51:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:39.880 04:51:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.880 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.880 04:51:56 -- spdk/autotest.sh@59 -- # create_test_list 00:05:39.880 04:51:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:39.880 04:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:39.880 04:51:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:39.880 04:51:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:39.880 04:51:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:39.880 04:51:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:39.880 04:51:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:39.880 04:51:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:39.880 04:51:56 -- common/autotest_common.sh@1457 -- # uname 00:05:39.880 04:51:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:39.880 04:51:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:39.880 04:51:56 -- common/autotest_common.sh@1477 -- # uname 00:05:39.880 04:51:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:39.880 04:51:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:39.881 04:51:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:39.881 lcov: LCOV version 1.15 00:05:39.881 04:51:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:54.790 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:54.790 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:12.905 04:52:26 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:12.905 04:52:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.905 04:52:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.905 04:52:26 -- spdk/autotest.sh@78 -- # rm -f 00:06:12.905 04:52:26 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:12.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.905 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:12.905 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:12.905 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:12.905 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:12.905 04:52:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:12.905 04:52:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:12.905 04:52:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:12.905 04:52:27 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:12.905 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.905 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:12.905 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:12.905 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:12.905 04:52:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.905 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.905 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:12.905 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:12.905 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:12.905 04:52:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.905 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.905 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:12.905 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:12.905 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:12.905 04:52:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.905 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.906 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:12.906 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:12.906 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:12.906 04:52:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.906 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.906 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:12.906 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:12.906 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:12.906 04:52:27 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.906 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.906 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:12.906 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:12.906 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:12.906 04:52:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.906 04:52:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.906 04:52:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:12.906 04:52:27 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:12.906 04:52:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:12.906 04:52:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.906 04:52:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:12.906 04:52:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.906 04:52:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.906 04:52:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:12.906 04:52:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:12.906 04:52:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:12.906 No valid GPT data, bailing 00:06:12.906 04:52:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:12.906 04:52:27 -- scripts/common.sh@394 -- # pt= 00:06:12.906 04:52:27 -- scripts/common.sh@395 -- # return 1 00:06:12.906 04:52:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:12.906 1+0 records in 00:06:12.906 1+0 records out 00:06:12.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289396 s, 36.2 MB/s 00:06:12.906 04:52:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.906 04:52:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.906 04:52:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:12.906 04:52:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:12.906 04:52:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:12.906 No valid GPT data, bailing 00:06:12.906 04:52:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:12.906 04:52:27 -- scripts/common.sh@394 -- # pt= 00:06:12.906 04:52:27 -- scripts/common.sh@395 -- # return 1 00:06:12.906 04:52:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:12.906 1+0 records in 00:06:12.906 1+0 records out 00:06:12.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493465 s, 212 MB/s 00:06:12.906 04:52:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.906 04:52:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.906 04:52:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:12.906 04:52:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:12.906 04:52:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:12.906 No valid GPT data, bailing 00:06:12.906 04:52:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:12.906 04:52:27 -- scripts/common.sh@394 -- # pt= 00:06:12.906 04:52:27 -- scripts/common.sh@395 -- # return 1 00:06:12.906 04:52:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:12.906 1+0 
records in 00:06:12.906 1+0 records out 00:06:12.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654993 s, 160 MB/s 00:06:12.906 04:52:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.906 04:52:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.906 04:52:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:12.906 04:52:27 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:12.906 04:52:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:12.906 No valid GPT data, bailing 00:06:12.906 04:52:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:12.906 04:52:28 -- scripts/common.sh@394 -- # pt= 00:06:12.906 04:52:28 -- scripts/common.sh@395 -- # return 1 00:06:12.906 04:52:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:12.906 1+0 records in 00:06:12.906 1+0 records out 00:06:12.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574346 s, 183 MB/s 00:06:12.906 04:52:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.906 04:52:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.906 04:52:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:12.906 04:52:28 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:12.906 04:52:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:12.906 No valid GPT data, bailing 00:06:12.906 04:52:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:12.906 04:52:28 -- scripts/common.sh@394 -- # pt= 00:06:12.906 04:52:28 -- scripts/common.sh@395 -- # return 1 00:06:12.906 04:52:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:12.906 1+0 records in 00:06:12.906 1+0 records out 00:06:12.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478594 s, 219 MB/s 00:06:12.906 04:52:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.906 04:52:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.906 04:52:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:12.906 04:52:28 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:12.906 04:52:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:12.906 No valid GPT data, bailing 00:06:12.906 04:52:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:12.906 04:52:28 -- scripts/common.sh@394 -- # pt= 00:06:12.906 04:52:28 -- scripts/common.sh@395 -- # return 1 00:06:12.906 04:52:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:12.906 1+0 records in 00:06:12.906 1+0 records out 00:06:12.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00631599 s, 166 MB/s 00:06:12.906 04:52:28 -- spdk/autotest.sh@105 -- # sync 00:06:12.906 04:52:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:12.906 04:52:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:12.906 04:52:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.478 04:52:30 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.478 04:52:30 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:13.478 04:52:30 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:13.478 04:52:30 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:14.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.620 
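[Editor's note] Each pass of the wipe loop above asks block_in_use whether the namespace carries a partition table (scripts/spdk-gpt.py first, blkid -s PTTYPE as the follow-up probe), and on "No valid GPT data, bailing" zeroes the first MiB so stale metadata cannot leak into the tests. The same guard-then-wipe step in isolation — destructive, so only point it at a disposable namespace like the QEMU-emulated devices in this run:

    # Wipe the first MiB of a namespace only if no partition table is found.
    dev=/dev/nvme0n1
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi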
Hugepages 00:06:14.620 node hugesize free / total 00:06:14.620 node0 1048576kB 0 / 0 00:06:14.620 node0 2048kB 0 / 0 00:06:14.620 00:06:14.620 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:14.620 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:14.620 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:14.620 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:06:14.882 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:14.882 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:14.882 04:52:31 -- spdk/autotest.sh@117 -- # uname -s 00:06:14.882 04:52:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:14.882 04:52:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:14.882 04:52:31 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:15.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.027 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.027 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.027 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.027 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.027 04:52:32 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:17.413 04:52:33 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:17.413 04:52:33 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:17.413 04:52:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:17.413 04:52:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:17.413 04:52:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:17.413 04:52:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:17.413 04:52:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.413 04:52:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.413 04:52:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:17.413 04:52:33 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:17.413 04:52:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:17.413 04:52:33 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:17.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.674 Waiting for block devices as requested 00:06:17.674 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.674 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.935 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.935 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:23.228 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:23.228 04:52:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.228 04:52:39 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:23.228 04:52:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:23.228 04:52:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1543 -- # continue 00:06:23.228 04:52:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:23.228 04:52:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1543 -- # continue 00:06:23.228 04:52:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:23.228 04:52:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1543 -- # continue 00:06:23.228 04:52:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:23.228 04:52:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:23.228 04:52:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:23.228 04:52:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:23.228 04:52:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
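[Editor's note] What the loop above does for each controller: `nvme id-ctrl` prints the Optional Admin Command Support field, the `grep oacs | cut -d: -f2` pair extracts ' 0x12a', and masking bit 3 (value 8) tests for Namespace Management support — hence oacs_ns_manage=8 on every QEMU controller here. The unvmcap check then reads the Unallocated NVM Capacity field, and the loop moves on when it is zero. A condensed sketch of the same probe (requires nvme-cli; /dev/nvme1 matches the first controller traced above):

    # Does this controller support Namespace Management (OACS bit 3)?
    oacs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oacs/ {print $2}')
    if (( (oacs & 0x8) != 0 )); then
        echo "namespace management supported (oacs=$oacs)"
    fi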
00:06:23.228 04:52:39 -- common/autotest_common.sh@1543 -- # continue 00:06:23.228 04:52:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:23.228 04:52:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.228 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.228 04:52:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:23.228 04:52:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.228 04:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.228 04:52:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:23.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.063 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.063 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.324 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.324 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:24.324 04:52:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:24.324 04:52:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.324 04:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:24.324 04:52:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:24.324 04:52:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:24.324 04:52:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:24.324 04:52:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:24.324 04:52:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:24.324 04:52:40 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:24.324 04:52:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:24.324 04:52:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:24.324 04:52:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:24.324 04:52:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:24.324 04:52:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:24.324 04:52:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:24.324 04:52:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:24.324 04:52:41 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:24.324 04:52:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:24.324 04:52:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:24.324 04:52:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.324 04:52:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:24.324 04:52:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.324 04:52:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:24.324 04:52:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:06:24.324 04:52:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:24.324 04:52:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:24.324 04:52:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:24.325 04:52:41 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:24.325 04:52:41 -- common/autotest_common.sh@1572 -- # return 0 00:06:24.325 04:52:41 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:24.325 04:52:41 -- common/autotest_common.sh@1580 -- # return 0 00:06:24.325 04:52:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:24.325 04:52:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:24.325 04:52:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:24.325 04:52:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:24.325 04:52:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:24.325 04:52:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.325 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.325 04:52:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:24.325 04:52:41 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:24.325 04:52:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.325 04:52:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.325 04:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.325 ************************************ 00:06:24.325 START TEST env 00:06:24.325 ************************************ 00:06:24.325 04:52:41 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:24.584 * Looking for test storage... 00:06:24.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.584 04:52:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.584 04:52:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.584 04:52:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.584 04:52:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.584 04:52:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.584 04:52:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.584 04:52:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.584 04:52:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.584 04:52:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.584 04:52:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.584 04:52:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.584 04:52:41 env -- scripts/common.sh@344 -- # case "$op" in 00:06:24.584 04:52:41 env -- scripts/common.sh@345 -- # : 1 00:06:24.584 04:52:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.584 04:52:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.584 04:52:41 env -- scripts/common.sh@365 -- # decimal 1 00:06:24.584 04:52:41 env -- scripts/common.sh@353 -- # local d=1 00:06:24.584 04:52:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.584 04:52:41 env -- scripts/common.sh@355 -- # echo 1 00:06:24.584 04:52:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.584 04:52:41 env -- scripts/common.sh@366 -- # decimal 2 00:06:24.584 04:52:41 env -- scripts/common.sh@353 -- # local d=2 00:06:24.584 04:52:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.584 04:52:41 env -- scripts/common.sh@355 -- # echo 2 00:06:24.584 04:52:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.584 04:52:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.584 04:52:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.584 04:52:41 env -- scripts/common.sh@368 -- # return 0 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.584 --rc genhtml_branch_coverage=1 00:06:24.584 --rc genhtml_function_coverage=1 00:06:24.584 --rc genhtml_legend=1 00:06:24.584 --rc geninfo_all_blocks=1 00:06:24.584 --rc geninfo_unexecuted_blocks=1 00:06:24.584 00:06:24.584 ' 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.584 --rc genhtml_branch_coverage=1 00:06:24.584 --rc genhtml_function_coverage=1 00:06:24.584 --rc genhtml_legend=1 00:06:24.584 --rc geninfo_all_blocks=1 00:06:24.584 --rc geninfo_unexecuted_blocks=1 00:06:24.584 00:06:24.584 ' 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.584 --rc genhtml_branch_coverage=1 00:06:24.584 --rc genhtml_function_coverage=1 00:06:24.584 --rc genhtml_legend=1 00:06:24.584 --rc geninfo_all_blocks=1 00:06:24.584 --rc geninfo_unexecuted_blocks=1 00:06:24.584 00:06:24.584 ' 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.584 --rc genhtml_branch_coverage=1 00:06:24.584 --rc genhtml_function_coverage=1 00:06:24.584 --rc genhtml_legend=1 00:06:24.584 --rc geninfo_all_blocks=1 00:06:24.584 --rc geninfo_unexecuted_blocks=1 00:06:24.584 00:06:24.584 ' 00:06:24.584 04:52:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.584 04:52:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.584 04:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.584 ************************************ 00:06:24.584 START TEST env_memory 00:06:24.584 ************************************ 00:06:24.584 04:52:41 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:24.584 00:06:24.584 00:06:24.584 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.584 http://cunit.sourceforge.net/ 00:06:24.584 00:06:24.584 00:06:24.584 Suite: memory 00:06:24.584 Test: alloc and free memory map ...[2024-11-21 04:52:41.274954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:24.584 passed 00:06:24.584 Test: mem map translation ...[2024-11-21 04:52:41.313788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:24.584 [2024-11-21 04:52:41.313903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:24.584 [2024-11-21 04:52:41.314027] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:24.584 [2024-11-21 04:52:41.314067] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:24.846 passed 00:06:24.846 Test: mem map registration ...[2024-11-21 04:52:41.382190] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:24.846 [2024-11-21 04:52:41.382293] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:24.846 passed 00:06:24.846 Test: mem map adjacent registrations ...passed 00:06:24.846 00:06:24.846 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.846 suites 1 1 n/a 0 0 00:06:24.846 tests 4 4 4 0 0 00:06:24.846 asserts 152 152 152 0 n/a 00:06:24.846 00:06:24.846 Elapsed time = 0.233 seconds 00:06:24.846 00:06:24.846 real 0m0.268s 00:06:24.846 user 0m0.239s 00:06:24.846 sys 0m0.019s 00:06:24.846 04:52:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.846 04:52:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:24.846 ************************************ 00:06:24.846 END TEST env_memory 00:06:24.846 ************************************ 00:06:24.846 04:52:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:24.846 04:52:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.846 04:52:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.846 04:52:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:24.846 ************************************ 00:06:24.846 START TEST env_vtophys 00:06:24.846 ************************************ 00:06:24.846 04:52:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:24.846 EAL: lib.eal log level changed from notice to debug 00:06:24.846 EAL: Detected lcore 0 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 1 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 2 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 3 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 4 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 5 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 6 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 7 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 8 as core 0 on socket 0 00:06:24.846 EAL: Detected lcore 9 as core 0 on socket 0 00:06:25.107 EAL: Maximum logical cores by configuration: 128 00:06:25.107 EAL: Detected CPU lcores: 10 00:06:25.107 EAL: Detected NUMA nodes: 1 00:06:25.107 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:25.107 EAL: Detected shared linkage of DPDK 00:06:25.107 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:25.107 EAL: Registered [vdev] bus. 00:06:25.107 EAL: bus.vdev log level changed from disabled to notice 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:25.107 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:25.107 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:25.107 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:25.107 EAL: No shared files mode enabled, IPC will be disabled 00:06:25.107 EAL: No shared files mode enabled, IPC is disabled 00:06:25.107 EAL: Selected IOVA mode 'PA' 00:06:25.107 EAL: Probing VFIO support... 00:06:25.107 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:25.107 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:25.107 EAL: Ask a virtual area of 0x2e000 bytes 00:06:25.107 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:25.107 EAL: Setting up physically contiguous memory... 00:06:25.107 EAL: Setting maximum number of open files to 524288 00:06:25.107 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:25.107 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:25.107 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.107 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:25.107 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.107 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.107 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:25.107 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:25.107 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.107 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:25.107 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.108 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.108 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:25.108 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:25.108 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.108 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:25.108 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.108 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.108 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:25.108 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:25.108 EAL: Ask a virtual area of 0x61000 bytes 00:06:25.108 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:25.108 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:25.108 EAL: Ask a virtual area of 0x400000000 bytes 00:06:25.108 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:25.108 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:25.108 EAL: Hugepages will be freed exactly as allocated. 00:06:25.108 EAL: No shared files mode enabled, IPC is disabled 00:06:25.108 EAL: No shared files mode enabled, IPC is disabled 00:06:25.108 EAL: TSC frequency is ~2600000 KHz 00:06:25.108 EAL: Main lcore 0 is ready (tid=7fd112a46a40;cpuset=[0]) 00:06:25.108 EAL: Trying to obtain current memory policy. 00:06:25.108 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.108 EAL: Restoring previous memory policy: 0 00:06:25.108 EAL: request: mp_malloc_sync 00:06:25.108 EAL: No shared files mode enabled, IPC is disabled 00:06:25.108 EAL: Heap on socket 0 was expanded by 2MB 00:06:25.108 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:25.108 EAL: No shared files mode enabled, IPC is disabled 00:06:25.108 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:25.108 EAL: Mem event callback 'spdk:(nil)' registered 00:06:25.108 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:25.108 00:06:25.108 00:06:25.108 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.108 http://cunit.sourceforge.net/ 00:06:25.108 00:06:25.108 00:06:25.108 Suite: components_suite 00:06:25.369 Test: vtophys_malloc_test ...passed 00:06:25.369 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:25.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.369 EAL: Restoring previous memory policy: 4 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was expanded by 4MB 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was shrunk by 4MB 00:06:25.369 EAL: Trying to obtain current memory policy. 00:06:25.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.369 EAL: Restoring previous memory policy: 4 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was expanded by 6MB 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was shrunk by 6MB 00:06:25.369 EAL: Trying to obtain current memory policy. 00:06:25.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.369 EAL: Restoring previous memory policy: 4 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was expanded by 10MB 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was shrunk by 10MB 00:06:25.369 EAL: Trying to obtain current memory policy. 
00:06:25.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.369 EAL: Restoring previous memory policy: 4 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was expanded by 18MB 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was shrunk by 18MB 00:06:25.369 EAL: Trying to obtain current memory policy. 00:06:25.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.369 EAL: Restoring previous memory policy: 4 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was expanded by 34MB 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was shrunk by 34MB 00:06:25.369 EAL: Trying to obtain current memory policy. 00:06:25.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.369 EAL: Restoring previous memory policy: 4 00:06:25.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.369 EAL: request: mp_malloc_sync 00:06:25.369 EAL: No shared files mode enabled, IPC is disabled 00:06:25.369 EAL: Heap on socket 0 was expanded by 66MB 00:06:25.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.631 EAL: request: mp_malloc_sync 00:06:25.631 EAL: No shared files mode enabled, IPC is disabled 00:06:25.631 EAL: Heap on socket 0 was shrunk by 66MB 00:06:25.631 EAL: Trying to obtain current memory policy. 00:06:25.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.631 EAL: Restoring previous memory policy: 4 00:06:25.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.631 EAL: request: mp_malloc_sync 00:06:25.631 EAL: No shared files mode enabled, IPC is disabled 00:06:25.631 EAL: Heap on socket 0 was expanded by 130MB 00:06:25.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.631 EAL: request: mp_malloc_sync 00:06:25.631 EAL: No shared files mode enabled, IPC is disabled 00:06:25.631 EAL: Heap on socket 0 was shrunk by 130MB 00:06:25.631 EAL: Trying to obtain current memory policy. 00:06:25.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.631 EAL: Restoring previous memory policy: 4 00:06:25.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.631 EAL: request: mp_malloc_sync 00:06:25.631 EAL: No shared files mode enabled, IPC is disabled 00:06:25.631 EAL: Heap on socket 0 was expanded by 258MB 00:06:25.631 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.631 EAL: request: mp_malloc_sync 00:06:25.631 EAL: No shared files mode enabled, IPC is disabled 00:06:25.631 EAL: Heap on socket 0 was shrunk by 258MB 00:06:25.631 EAL: Trying to obtain current memory policy. 
00:06:25.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:25.893 EAL: Restoring previous memory policy: 4 00:06:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:06:25.893 EAL: request: mp_malloc_sync 00:06:25.893 EAL: No shared files mode enabled, IPC is disabled 00:06:25.893 EAL: Heap on socket 0 was expanded by 514MB 00:06:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.154 EAL: request: mp_malloc_sync 00:06:26.154 EAL: No shared files mode enabled, IPC is disabled 00:06:26.154 EAL: Heap on socket 0 was shrunk by 514MB 00:06:26.154 EAL: Trying to obtain current memory policy. 00:06:26.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:26.416 EAL: Restoring previous memory policy: 4 00:06:26.416 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.416 EAL: request: mp_malloc_sync 00:06:26.416 EAL: No shared files mode enabled, IPC is disabled 00:06:26.416 EAL: Heap on socket 0 was expanded by 1026MB 00:06:26.677 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.677 passed 00:06:26.677 00:06:26.677 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.677 suites 1 1 n/a 0 0 00:06:26.677 tests 2 2 2 0 0 00:06:26.677 asserts 5575 5575 5575 0 n/a 00:06:26.677 00:06:26.677 Elapsed time = 1.545 seconds 00:06:26.677 EAL: request: mp_malloc_sync 00:06:26.677 EAL: No shared files mode enabled, IPC is disabled 00:06:26.677 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:26.677 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.677 EAL: request: mp_malloc_sync 00:06:26.677 EAL: No shared files mode enabled, IPC is disabled 00:06:26.677 EAL: Heap on socket 0 was shrunk by 2MB 00:06:26.677 EAL: No shared files mode enabled, IPC is disabled 00:06:26.677 EAL: No shared files mode enabled, IPC is disabled 00:06:26.677 EAL: No shared files mode enabled, IPC is disabled 00:06:26.677 00:06:26.677 real 0m1.805s 00:06:26.677 user 0m0.769s 00:06:26.677 sys 0m0.886s 00:06:26.677 04:52:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.677 04:52:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:26.677 ************************************ 00:06:26.677 END TEST env_vtophys 00:06:26.677 ************************************ 00:06:26.941 04:52:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:26.941 04:52:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.941 04:52:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.941 04:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.941 ************************************ 00:06:26.941 START TEST env_pci 00:06:26.941 ************************************ 00:06:26.941 04:52:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:26.941 00:06:26.941 00:06:26.941 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.941 http://cunit.sourceforge.net/ 00:06:26.941 00:06:26.941 00:06:26.941 Suite: pci 00:06:26.941 Test: pci_hook ...[2024-11-21 04:52:43.443358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69389 has claimed it 00:06:26.941 EAL: Cannot find device (10000:00:01.0) 00:06:26.941 passed 00:06:26.941 00:06:26.941 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.941 suites 1 1 n/a 0 0 00:06:26.941 tests 1 1 1 0 0 00:06:26.941 asserts 25 25 25 0 n/a 00:06:26.941 00:06:26.941 Elapsed 
time = 0.005 seconds 00:06:26.941 EAL: Failed to attach device on primary process 00:06:26.941 00:06:26.941 real 0m0.058s 00:06:26.941 user 0m0.023s 00:06:26.941 sys 0m0.035s 00:06:26.941 04:52:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.941 04:52:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:26.941 ************************************ 00:06:26.941 END TEST env_pci 00:06:26.941 ************************************ 00:06:26.941 04:52:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:26.941 04:52:43 env -- env/env.sh@15 -- # uname 00:06:26.941 04:52:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:26.941 04:52:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:26.941 04:52:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:26.941 04:52:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:26.941 04:52:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.941 04:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.941 ************************************ 00:06:26.941 START TEST env_dpdk_post_init 00:06:26.941 ************************************ 00:06:26.941 04:52:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:26.941 EAL: Detected CPU lcores: 10 00:06:26.941 EAL: Detected NUMA nodes: 1 00:06:26.941 EAL: Detected shared linkage of DPDK 00:06:26.941 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:26.941 EAL: Selected IOVA mode 'PA' 00:06:27.202 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:27.202 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:27.202 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:27.202 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:27.202 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:27.202 Starting DPDK initialization... 00:06:27.202 Starting SPDK post initialization... 00:06:27.202 SPDK NVMe probe 00:06:27.202 Attaching to 0000:00:10.0 00:06:27.202 Attaching to 0000:00:11.0 00:06:27.202 Attaching to 0000:00:12.0 00:06:27.202 Attaching to 0000:00:13.0 00:06:27.202 Attached to 0000:00:11.0 00:06:27.202 Attached to 0000:00:13.0 00:06:27.202 Attached to 0000:00:10.0 00:06:27.202 Attached to 0000:00:12.0 00:06:27.202 Cleaning up... 
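In the probe output above, the "Attaching to" lines appear in BDF scan order (10.0, 11.0, 12.0, 13.0) while the "Attached to" lines land in completion order (11.0, 13.0, 10.0, 12.0): spdk_nvme_probe() invokes the probe callback once per discovered controller, but the attach callback only fires as each controller finishes initializing, so the two orders need not match. A minimal sketch of that flow, assuming the standard hello_world-style probe API; the program name and print strings are illustrative, not taken from this run:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Called once per discovered controller, in scan order; returning true
 * asks the driver to attach it. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;
}

/* Called when a controller finishes initialization; completion can happen
 * in a different order than probe_cb was invoked, which is why the log's
 * "Attached to" lines are shuffled relative to "Attaching to". */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_order_demo"; /* illustrative name, an assumption */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}
	/* A NULL transport ID means "probe all local PCIe NVMe controllers". */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe failed\n");
		return 1;
	}
	return 0;
}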
00:06:27.202 00:06:27.202 real 0m0.230s 00:06:27.202 user 0m0.061s 00:06:27.202 sys 0m0.071s 00:06:27.202 04:52:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.202 04:52:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:27.202 ************************************ 00:06:27.202 END TEST env_dpdk_post_init 00:06:27.202 ************************************ 00:06:27.202 04:52:43 env -- env/env.sh@26 -- # uname 00:06:27.202 04:52:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:27.202 04:52:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:27.202 04:52:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.202 04:52:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.202 04:52:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:27.202 ************************************ 00:06:27.202 START TEST env_mem_callbacks 00:06:27.202 ************************************ 00:06:27.202 04:52:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:27.202 EAL: Detected CPU lcores: 10 00:06:27.202 EAL: Detected NUMA nodes: 1 00:06:27.202 EAL: Detected shared linkage of DPDK 00:06:27.202 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:27.202 EAL: Selected IOVA mode 'PA' 00:06:27.463 00:06:27.463 00:06:27.463 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.463 http://cunit.sourceforge.net/ 00:06:27.463 00:06:27.463 00:06:27.463 Suite: memory 00:06:27.463 Test: test ... 00:06:27.463 register 0x200000200000 2097152 00:06:27.463 malloc 3145728 00:06:27.463 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:27.463 register 0x200000400000 4194304 00:06:27.463 buf 0x200000500000 len 3145728 PASSED 00:06:27.463 malloc 64 00:06:27.463 buf 0x2000004fff40 len 64 PASSED 00:06:27.463 malloc 4194304 00:06:27.463 register 0x200000800000 6291456 00:06:27.463 buf 0x200000a00000 len 4194304 PASSED 00:06:27.463 free 0x200000500000 3145728 00:06:27.463 free 0x2000004fff40 64 00:06:27.463 unregister 0x200000400000 4194304 PASSED 00:06:27.463 free 0x200000a00000 4194304 00:06:27.463 unregister 0x200000800000 6291456 PASSED 00:06:27.463 malloc 8388608 00:06:27.463 register 0x200000400000 10485760 00:06:27.463 buf 0x200000600000 len 8388608 PASSED 00:06:27.463 free 0x200000600000 8388608 00:06:27.463 unregister 0x200000400000 10485760 PASSED 00:06:27.463 passed 00:06:27.463 00:06:27.463 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.463 suites 1 1 n/a 0 0 00:06:27.463 tests 1 1 1 0 0 00:06:27.463 asserts 15 15 15 0 n/a 00:06:27.463 00:06:27.463 Elapsed time = 0.010 seconds 00:06:27.463 ************************************ 00:06:27.463 END TEST env_mem_callbacks 00:06:27.463 ************************************ 00:06:27.463 00:06:27.463 real 0m0.176s 00:06:27.463 user 0m0.026s 00:06:27.463 sys 0m0.048s 00:06:27.463 04:52:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.463 04:52:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:27.463 ************************************ 00:06:27.463 END TEST env 00:06:27.463 ************************************ 00:06:27.463 00:06:27.464 real 0m3.010s 00:06:27.464 user 0m1.260s 00:06:27.464 sys 0m1.291s 00:06:27.464 04:52:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.464 04:52:44 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.464 04:52:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:27.464 04:52:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.464 04:52:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.464 04:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:27.464 ************************************ 00:06:27.464 START TEST rpc 00:06:27.464 ************************************ 00:06:27.464 04:52:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:27.724 * Looking for test storage... 00:06:27.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.724 04:52:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.725 04:52:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.725 04:52:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.725 04:52:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.725 04:52:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.725 04:52:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.725 04:52:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:27.725 04:52:44 rpc -- scripts/common.sh@345 -- # : 1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.725 04:52:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.725 04:52:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@353 -- # local d=1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.725 04:52:44 rpc -- scripts/common.sh@355 -- # echo 1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.725 04:52:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@353 -- # local d=2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.725 04:52:44 rpc -- scripts/common.sh@355 -- # echo 2 00:06:27.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.725 04:52:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.725 04:52:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.725 04:52:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.725 04:52:44 rpc -- scripts/common.sh@368 -- # return 0 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.725 --rc genhtml_branch_coverage=1 00:06:27.725 --rc genhtml_function_coverage=1 00:06:27.725 --rc genhtml_legend=1 00:06:27.725 --rc geninfo_all_blocks=1 00:06:27.725 --rc geninfo_unexecuted_blocks=1 00:06:27.725 00:06:27.725 ' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.725 --rc genhtml_branch_coverage=1 00:06:27.725 --rc genhtml_function_coverage=1 00:06:27.725 --rc genhtml_legend=1 00:06:27.725 --rc geninfo_all_blocks=1 00:06:27.725 --rc geninfo_unexecuted_blocks=1 00:06:27.725 00:06:27.725 ' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.725 --rc genhtml_branch_coverage=1 00:06:27.725 --rc genhtml_function_coverage=1 00:06:27.725 --rc genhtml_legend=1 00:06:27.725 --rc geninfo_all_blocks=1 00:06:27.725 --rc geninfo_unexecuted_blocks=1 00:06:27.725 00:06:27.725 ' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.725 --rc genhtml_branch_coverage=1 00:06:27.725 --rc genhtml_function_coverage=1 00:06:27.725 --rc genhtml_legend=1 00:06:27.725 --rc geninfo_all_blocks=1 00:06:27.725 --rc geninfo_unexecuted_blocks=1 00:06:27.725 00:06:27.725 ' 00:06:27.725 04:52:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69516 00:06:27.725 04:52:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.725 04:52:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:27.725 04:52:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69516 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 69516 ']' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.725 04:52:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.725 [2024-11-21 04:52:44.389135] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:27.725 [2024-11-21 04:52:44.389583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69516 ] 00:06:27.986 [2024-11-21 04:52:44.554357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.986 [2024-11-21 04:52:44.595845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:06:27.986 [2024-11-21 04:52:44.596117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69516' to capture a snapshot of events at runtime. 00:06:27.986 [2024-11-21 04:52:44.596207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.986 [2024-11-21 04:52:44.596241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.986 [2024-11-21 04:52:44.596266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69516 for offline analysis/debug. 00:06:27.986 [2024-11-21 04:52:44.596803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.558 04:52:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.558 04:52:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.558 04:52:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:28.558 04:52:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:28.558 04:52:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:28.558 04:52:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:28.558 04:52:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.558 04:52:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.558 04:52:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.558 ************************************ 00:06:28.558 START TEST rpc_integrity 00:06:28.558 ************************************ 00:06:28.558 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:28.558 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:28.558 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.558 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.558 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.558 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:28.558 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:28.819 { 00:06:28.819 "name": "Malloc0", 00:06:28.819 "aliases": [ 00:06:28.819 "471cee69-d7f8-46be-9ce8-244e846dddac" 00:06:28.819 ], 
00:06:28.819 "product_name": "Malloc disk", 00:06:28.819 "block_size": 512, 00:06:28.819 "num_blocks": 16384, 00:06:28.819 "uuid": "471cee69-d7f8-46be-9ce8-244e846dddac", 00:06:28.819 "assigned_rate_limits": { 00:06:28.819 "rw_ios_per_sec": 0, 00:06:28.819 "rw_mbytes_per_sec": 0, 00:06:28.819 "r_mbytes_per_sec": 0, 00:06:28.819 "w_mbytes_per_sec": 0 00:06:28.819 }, 00:06:28.819 "claimed": false, 00:06:28.819 "zoned": false, 00:06:28.819 "supported_io_types": { 00:06:28.819 "read": true, 00:06:28.819 "write": true, 00:06:28.819 "unmap": true, 00:06:28.819 "flush": true, 00:06:28.819 "reset": true, 00:06:28.819 "nvme_admin": false, 00:06:28.819 "nvme_io": false, 00:06:28.819 "nvme_io_md": false, 00:06:28.819 "write_zeroes": true, 00:06:28.819 "zcopy": true, 00:06:28.819 "get_zone_info": false, 00:06:28.819 "zone_management": false, 00:06:28.819 "zone_append": false, 00:06:28.819 "compare": false, 00:06:28.819 "compare_and_write": false, 00:06:28.819 "abort": true, 00:06:28.819 "seek_hole": false, 00:06:28.819 "seek_data": false, 00:06:28.819 "copy": true, 00:06:28.819 "nvme_iov_md": false 00:06:28.819 }, 00:06:28.819 "memory_domains": [ 00:06:28.819 { 00:06:28.819 "dma_device_id": "system", 00:06:28.819 "dma_device_type": 1 00:06:28.819 }, 00:06:28.819 { 00:06:28.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.819 "dma_device_type": 2 00:06:28.819 } 00:06:28.819 ], 00:06:28.819 "driver_specific": {} 00:06:28.819 } 00:06:28.819 ]' 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 [2024-11-21 04:52:45.372574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:28.819 [2024-11-21 04:52:45.372668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.819 [2024-11-21 04:52:45.372701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:28.819 [2024-11-21 04:52:45.372711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.819 [2024-11-21 04:52:45.375394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.819 [2024-11-21 04:52:45.375451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:28.819 Passthru0 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.819 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:28.819 { 00:06:28.819 "name": "Malloc0", 00:06:28.819 "aliases": [ 00:06:28.819 "471cee69-d7f8-46be-9ce8-244e846dddac" 00:06:28.819 ], 00:06:28.819 "product_name": "Malloc disk", 00:06:28.819 "block_size": 512, 00:06:28.819 "num_blocks": 16384, 00:06:28.819 "uuid": "471cee69-d7f8-46be-9ce8-244e846dddac", 00:06:28.819 "assigned_rate_limits": { 00:06:28.819 "rw_ios_per_sec": 0, 
00:06:28.819 "rw_mbytes_per_sec": 0, 00:06:28.819 "r_mbytes_per_sec": 0, 00:06:28.819 "w_mbytes_per_sec": 0 00:06:28.819 }, 00:06:28.819 "claimed": true, 00:06:28.819 "claim_type": "exclusive_write", 00:06:28.819 "zoned": false, 00:06:28.819 "supported_io_types": { 00:06:28.819 "read": true, 00:06:28.819 "write": true, 00:06:28.819 "unmap": true, 00:06:28.819 "flush": true, 00:06:28.819 "reset": true, 00:06:28.819 "nvme_admin": false, 00:06:28.819 "nvme_io": false, 00:06:28.819 "nvme_io_md": false, 00:06:28.819 "write_zeroes": true, 00:06:28.819 "zcopy": true, 00:06:28.819 "get_zone_info": false, 00:06:28.819 "zone_management": false, 00:06:28.819 "zone_append": false, 00:06:28.819 "compare": false, 00:06:28.819 "compare_and_write": false, 00:06:28.819 "abort": true, 00:06:28.819 "seek_hole": false, 00:06:28.819 "seek_data": false, 00:06:28.819 "copy": true, 00:06:28.819 "nvme_iov_md": false 00:06:28.819 }, 00:06:28.819 "memory_domains": [ 00:06:28.819 { 00:06:28.819 "dma_device_id": "system", 00:06:28.819 "dma_device_type": 1 00:06:28.819 }, 00:06:28.819 { 00:06:28.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.819 "dma_device_type": 2 00:06:28.819 } 00:06:28.819 ], 00:06:28.819 "driver_specific": {} 00:06:28.819 }, 00:06:28.819 { 00:06:28.819 "name": "Passthru0", 00:06:28.819 "aliases": [ 00:06:28.819 "eb2867d5-59b9-5fa3-877e-43cbad175886" 00:06:28.819 ], 00:06:28.819 "product_name": "passthru", 00:06:28.819 "block_size": 512, 00:06:28.819 "num_blocks": 16384, 00:06:28.819 "uuid": "eb2867d5-59b9-5fa3-877e-43cbad175886", 00:06:28.819 "assigned_rate_limits": { 00:06:28.819 "rw_ios_per_sec": 0, 00:06:28.819 "rw_mbytes_per_sec": 0, 00:06:28.819 "r_mbytes_per_sec": 0, 00:06:28.819 "w_mbytes_per_sec": 0 00:06:28.819 }, 00:06:28.819 "claimed": false, 00:06:28.819 "zoned": false, 00:06:28.819 "supported_io_types": { 00:06:28.819 "read": true, 00:06:28.819 "write": true, 00:06:28.819 "unmap": true, 00:06:28.819 "flush": true, 00:06:28.819 "reset": true, 00:06:28.819 "nvme_admin": false, 00:06:28.819 "nvme_io": false, 00:06:28.819 "nvme_io_md": false, 00:06:28.819 "write_zeroes": true, 00:06:28.819 "zcopy": true, 00:06:28.819 "get_zone_info": false, 00:06:28.819 "zone_management": false, 00:06:28.819 "zone_append": false, 00:06:28.819 "compare": false, 00:06:28.819 "compare_and_write": false, 00:06:28.819 "abort": true, 00:06:28.819 "seek_hole": false, 00:06:28.819 "seek_data": false, 00:06:28.819 "copy": true, 00:06:28.819 "nvme_iov_md": false 00:06:28.819 }, 00:06:28.819 "memory_domains": [ 00:06:28.819 { 00:06:28.819 "dma_device_id": "system", 00:06:28.819 "dma_device_type": 1 00:06:28.819 }, 00:06:28.819 { 00:06:28.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.819 "dma_device_type": 2 00:06:28.819 } 00:06:28.819 ], 00:06:28.819 "driver_specific": { 00:06:28.819 "passthru": { 00:06:28.819 "name": "Passthru0", 00:06:28.819 "base_bdev_name": "Malloc0" 00:06:28.819 } 00:06:28.819 } 00:06:28.819 } 00:06:28.819 ]' 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:28.819 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:28.820 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.820 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.820 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.820 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:28.820 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:28.820 ************************************ 00:06:28.820 END TEST rpc_integrity 00:06:28.820 ************************************ 00:06:28.820 04:52:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:28.820 00:06:28.820 real 0m0.237s 00:06:28.820 user 0m0.133s 00:06:28.820 sys 0m0.036s 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.820 04:52:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.820 04:52:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:28.820 04:52:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.820 04:52:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.820 04:52:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.820 ************************************ 00:06:28.820 START TEST rpc_plugins 00:06:28.820 ************************************ 00:06:28.820 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:29.081 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:29.081 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.081 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.081 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.081 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:29.081 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:29.081 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.081 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.081 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.081 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:29.081 { 00:06:29.081 "name": "Malloc1", 00:06:29.081 "aliases": [ 00:06:29.081 "eff2ecd3-5e23-471b-a10b-196249d3e2bb" 00:06:29.081 ], 00:06:29.081 "product_name": "Malloc disk", 00:06:29.081 "block_size": 4096, 00:06:29.081 "num_blocks": 256, 00:06:29.081 "uuid": "eff2ecd3-5e23-471b-a10b-196249d3e2bb", 00:06:29.081 "assigned_rate_limits": { 00:06:29.081 "rw_ios_per_sec": 0, 00:06:29.081 "rw_mbytes_per_sec": 0, 00:06:29.081 "r_mbytes_per_sec": 0, 00:06:29.081 "w_mbytes_per_sec": 0 00:06:29.082 }, 00:06:29.082 "claimed": false, 00:06:29.082 "zoned": false, 00:06:29.082 "supported_io_types": { 00:06:29.082 "read": true, 00:06:29.082 "write": true, 00:06:29.082 "unmap": true, 00:06:29.082 "flush": true, 00:06:29.082 "reset": true, 00:06:29.082 "nvme_admin": false, 00:06:29.082 "nvme_io": false, 00:06:29.082 "nvme_io_md": false, 00:06:29.082 "write_zeroes": true, 
00:06:29.082 "zcopy": true, 00:06:29.082 "get_zone_info": false, 00:06:29.082 "zone_management": false, 00:06:29.082 "zone_append": false, 00:06:29.082 "compare": false, 00:06:29.082 "compare_and_write": false, 00:06:29.082 "abort": true, 00:06:29.082 "seek_hole": false, 00:06:29.082 "seek_data": false, 00:06:29.082 "copy": true, 00:06:29.082 "nvme_iov_md": false 00:06:29.082 }, 00:06:29.082 "memory_domains": [ 00:06:29.082 { 00:06:29.082 "dma_device_id": "system", 00:06:29.082 "dma_device_type": 1 00:06:29.082 }, 00:06:29.082 { 00:06:29.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.082 "dma_device_type": 2 00:06:29.082 } 00:06:29.082 ], 00:06:29.082 "driver_specific": {} 00:06:29.082 } 00:06:29.082 ]' 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:29.082 ************************************ 00:06:29.082 END TEST rpc_plugins 00:06:29.082 ************************************ 00:06:29.082 04:52:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:29.082 00:06:29.082 real 0m0.124s 00:06:29.082 user 0m0.066s 00:06:29.082 sys 0m0.017s 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.082 04:52:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.082 04:52:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:29.082 04:52:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.082 04:52:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.082 04:52:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.082 ************************************ 00:06:29.082 START TEST rpc_trace_cmd_test 00:06:29.082 ************************************ 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:29.082 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69516", 00:06:29.082 "tpoint_group_mask": "0x8", 00:06:29.082 "iscsi_conn": { 00:06:29.082 "mask": "0x2", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "scsi": { 00:06:29.082 
"mask": "0x4", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "bdev": { 00:06:29.082 "mask": "0x8", 00:06:29.082 "tpoint_mask": "0xffffffffffffffff" 00:06:29.082 }, 00:06:29.082 "nvmf_rdma": { 00:06:29.082 "mask": "0x10", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "nvmf_tcp": { 00:06:29.082 "mask": "0x20", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "ftl": { 00:06:29.082 "mask": "0x40", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "blobfs": { 00:06:29.082 "mask": "0x80", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "dsa": { 00:06:29.082 "mask": "0x200", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "thread": { 00:06:29.082 "mask": "0x400", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "nvme_pcie": { 00:06:29.082 "mask": "0x800", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "iaa": { 00:06:29.082 "mask": "0x1000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "nvme_tcp": { 00:06:29.082 "mask": "0x2000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "bdev_nvme": { 00:06:29.082 "mask": "0x4000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "sock": { 00:06:29.082 "mask": "0x8000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "blob": { 00:06:29.082 "mask": "0x10000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "bdev_raid": { 00:06:29.082 "mask": "0x20000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 }, 00:06:29.082 "scheduler": { 00:06:29.082 "mask": "0x40000", 00:06:29.082 "tpoint_mask": "0x0" 00:06:29.082 } 00:06:29.082 }' 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:29.082 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:29.343 ************************************ 00:06:29.343 END TEST rpc_trace_cmd_test 00:06:29.343 ************************************ 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:29.343 00:06:29.343 real 0m0.168s 00:06:29.343 user 0m0.136s 00:06:29.343 sys 0m0.021s 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.343 04:52:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.343 04:52:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:29.343 04:52:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:29.343 04:52:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:29.343 04:52:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.343 04:52:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.343 04:52:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.343 ************************************ 00:06:29.343 START TEST rpc_daemon_integrity 00:06:29.343 
************************************ 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:29.344 04:52:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:29.344 { 00:06:29.344 "name": "Malloc2", 00:06:29.344 "aliases": [ 00:06:29.344 "b7ba6d0a-1c44-4350-b274-b6401451efc5" 00:06:29.344 ], 00:06:29.344 "product_name": "Malloc disk", 00:06:29.344 "block_size": 512, 00:06:29.344 "num_blocks": 16384, 00:06:29.344 "uuid": "b7ba6d0a-1c44-4350-b274-b6401451efc5", 00:06:29.344 "assigned_rate_limits": { 00:06:29.344 "rw_ios_per_sec": 0, 00:06:29.344 "rw_mbytes_per_sec": 0, 00:06:29.344 "r_mbytes_per_sec": 0, 00:06:29.344 "w_mbytes_per_sec": 0 00:06:29.344 }, 00:06:29.344 "claimed": false, 00:06:29.344 "zoned": false, 00:06:29.344 "supported_io_types": { 00:06:29.344 "read": true, 00:06:29.344 "write": true, 00:06:29.344 "unmap": true, 00:06:29.344 "flush": true, 00:06:29.344 "reset": true, 00:06:29.344 "nvme_admin": false, 00:06:29.344 "nvme_io": false, 00:06:29.344 "nvme_io_md": false, 00:06:29.344 "write_zeroes": true, 00:06:29.344 "zcopy": true, 00:06:29.344 "get_zone_info": false, 00:06:29.344 "zone_management": false, 00:06:29.344 "zone_append": false, 00:06:29.344 "compare": false, 00:06:29.344 "compare_and_write": false, 00:06:29.344 "abort": true, 00:06:29.344 "seek_hole": false, 00:06:29.344 "seek_data": false, 00:06:29.344 "copy": true, 00:06:29.344 "nvme_iov_md": false 00:06:29.344 }, 00:06:29.344 "memory_domains": [ 00:06:29.344 { 00:06:29.344 "dma_device_id": "system", 00:06:29.344 "dma_device_type": 1 00:06:29.344 }, 00:06:29.344 { 00:06:29.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.344 "dma_device_type": 2 00:06:29.344 } 00:06:29.344 ], 00:06:29.344 "driver_specific": {} 00:06:29.344 } 00:06:29.344 ]' 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.344 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.604 [2024-11-21 04:52:46.077789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:29.604 [2024-11-21 04:52:46.078000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.604 [2024-11-21 04:52:46.078046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:29.604 [2024-11-21 04:52:46.078058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.604 [2024-11-21 04:52:46.080746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.604 [2024-11-21 04:52:46.080785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:29.604 Passthru0 00:06:29.604 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.604 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:29.604 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.604 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.604 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.604 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:29.604 { 00:06:29.604 "name": "Malloc2", 00:06:29.604 "aliases": [ 00:06:29.604 "b7ba6d0a-1c44-4350-b274-b6401451efc5" 00:06:29.604 ], 00:06:29.604 "product_name": "Malloc disk", 00:06:29.604 "block_size": 512, 00:06:29.604 "num_blocks": 16384, 00:06:29.604 "uuid": "b7ba6d0a-1c44-4350-b274-b6401451efc5", 00:06:29.604 "assigned_rate_limits": { 00:06:29.604 "rw_ios_per_sec": 0, 00:06:29.604 "rw_mbytes_per_sec": 0, 00:06:29.604 "r_mbytes_per_sec": 0, 00:06:29.604 "w_mbytes_per_sec": 0 00:06:29.604 }, 00:06:29.604 "claimed": true, 00:06:29.604 "claim_type": "exclusive_write", 00:06:29.605 "zoned": false, 00:06:29.605 "supported_io_types": { 00:06:29.605 "read": true, 00:06:29.605 "write": true, 00:06:29.605 "unmap": true, 00:06:29.605 "flush": true, 00:06:29.605 "reset": true, 00:06:29.605 "nvme_admin": false, 00:06:29.605 "nvme_io": false, 00:06:29.605 "nvme_io_md": false, 00:06:29.605 "write_zeroes": true, 00:06:29.605 "zcopy": true, 00:06:29.605 "get_zone_info": false, 00:06:29.605 "zone_management": false, 00:06:29.605 "zone_append": false, 00:06:29.605 "compare": false, 00:06:29.605 "compare_and_write": false, 00:06:29.605 "abort": true, 00:06:29.605 "seek_hole": false, 00:06:29.605 "seek_data": false, 00:06:29.605 "copy": true, 00:06:29.605 "nvme_iov_md": false 00:06:29.605 }, 00:06:29.605 "memory_domains": [ 00:06:29.605 { 00:06:29.605 "dma_device_id": "system", 00:06:29.605 "dma_device_type": 1 00:06:29.605 }, 00:06:29.605 { 00:06:29.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.605 "dma_device_type": 2 00:06:29.605 } 00:06:29.605 ], 00:06:29.605 "driver_specific": {} 00:06:29.605 }, 00:06:29.605 { 00:06:29.605 "name": "Passthru0", 00:06:29.605 "aliases": [ 00:06:29.605 "89fdac42-9286-5d65-8b80-40444e70fbfd" 00:06:29.605 ], 00:06:29.605 "product_name": "passthru", 00:06:29.605 "block_size": 512, 00:06:29.605 "num_blocks": 16384, 00:06:29.605 "uuid": "89fdac42-9286-5d65-8b80-40444e70fbfd", 00:06:29.605 "assigned_rate_limits": { 00:06:29.605 
"rw_ios_per_sec": 0, 00:06:29.605 "rw_mbytes_per_sec": 0, 00:06:29.605 "r_mbytes_per_sec": 0, 00:06:29.605 "w_mbytes_per_sec": 0 00:06:29.605 }, 00:06:29.605 "claimed": false, 00:06:29.605 "zoned": false, 00:06:29.605 "supported_io_types": { 00:06:29.605 "read": true, 00:06:29.605 "write": true, 00:06:29.605 "unmap": true, 00:06:29.605 "flush": true, 00:06:29.605 "reset": true, 00:06:29.605 "nvme_admin": false, 00:06:29.605 "nvme_io": false, 00:06:29.605 "nvme_io_md": false, 00:06:29.605 "write_zeroes": true, 00:06:29.605 "zcopy": true, 00:06:29.605 "get_zone_info": false, 00:06:29.605 "zone_management": false, 00:06:29.605 "zone_append": false, 00:06:29.605 "compare": false, 00:06:29.605 "compare_and_write": false, 00:06:29.605 "abort": true, 00:06:29.605 "seek_hole": false, 00:06:29.605 "seek_data": false, 00:06:29.605 "copy": true, 00:06:29.605 "nvme_iov_md": false 00:06:29.605 }, 00:06:29.605 "memory_domains": [ 00:06:29.605 { 00:06:29.605 "dma_device_id": "system", 00:06:29.605 "dma_device_type": 1 00:06:29.605 }, 00:06:29.605 { 00:06:29.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.605 "dma_device_type": 2 00:06:29.605 } 00:06:29.605 ], 00:06:29.605 "driver_specific": { 00:06:29.605 "passthru": { 00:06:29.605 "name": "Passthru0", 00:06:29.605 "base_bdev_name": "Malloc2" 00:06:29.605 } 00:06:29.605 } 00:06:29.605 } 00:06:29.605 ]' 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:29.605 ************************************ 00:06:29.605 END TEST rpc_daemon_integrity 00:06:29.605 ************************************ 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:29.605 00:06:29.605 real 0m0.234s 00:06:29.605 user 0m0.130s 00:06:29.605 sys 0m0.032s 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.605 04:52:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.605 04:52:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:29.605 04:52:46 rpc -- rpc/rpc.sh@84 -- # killprocess 69516 00:06:29.605 04:52:46 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 69516 ']' 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@958 -- # kill -0 69516 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69516 00:06:29.605 killing process with pid 69516 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69516' 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@973 -- # kill 69516 00:06:29.605 04:52:46 rpc -- common/autotest_common.sh@978 -- # wait 69516 00:06:30.177 00:06:30.177 real 0m2.632s 00:06:30.177 user 0m2.893s 00:06:30.177 sys 0m0.808s 00:06:30.177 04:52:46 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.177 ************************************ 00:06:30.177 END TEST rpc 00:06:30.177 ************************************ 00:06:30.177 04:52:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.177 04:52:46 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:30.177 04:52:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.177 04:52:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.177 04:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:30.177 ************************************ 00:06:30.177 START TEST skip_rpc 00:06:30.177 ************************************ 00:06:30.177 04:52:46 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:30.177 * Looking for test storage... 00:06:30.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:30.177 04:52:46 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.439 04:52:46 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.439 04:52:46 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.439 04:52:46 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.439 04:52:46 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.440 04:52:46 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.440 --rc genhtml_branch_coverage=1 00:06:30.440 --rc genhtml_function_coverage=1 00:06:30.440 --rc genhtml_legend=1 00:06:30.440 --rc geninfo_all_blocks=1 00:06:30.440 --rc geninfo_unexecuted_blocks=1 00:06:30.440 00:06:30.440 ' 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.440 --rc genhtml_branch_coverage=1 00:06:30.440 --rc genhtml_function_coverage=1 00:06:30.440 --rc genhtml_legend=1 00:06:30.440 --rc geninfo_all_blocks=1 00:06:30.440 --rc geninfo_unexecuted_blocks=1 00:06:30.440 00:06:30.440 ' 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.440 --rc genhtml_branch_coverage=1 00:06:30.440 --rc genhtml_function_coverage=1 00:06:30.440 --rc genhtml_legend=1 00:06:30.440 --rc geninfo_all_blocks=1 00:06:30.440 --rc geninfo_unexecuted_blocks=1 00:06:30.440 00:06:30.440 ' 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.440 --rc genhtml_branch_coverage=1 00:06:30.440 --rc genhtml_function_coverage=1 00:06:30.440 --rc genhtml_legend=1 00:06:30.440 --rc geninfo_all_blocks=1 00:06:30.440 --rc geninfo_unexecuted_blocks=1 00:06:30.440 00:06:30.440 ' 00:06:30.440 04:52:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:30.440 04:52:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:30.440 04:52:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.440 04:52:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.440 ************************************ 00:06:30.440 START TEST skip_rpc 00:06:30.440 ************************************ 00:06:30.440 04:52:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:30.440 04:52:47 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=69718 00:06:30.440 04:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.440 04:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:30.440 04:52:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:30.440 [2024-11-21 04:52:47.099586] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:30.440 [2024-11-21 04:52:47.099945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69718 ] 00:06:30.702 [2024-11-21 04:52:47.267312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.702 [2024-11-21 04:52:47.304662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69718 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69718 ']' 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69718 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69718 00:06:35.980 killing process with pid 69718 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69718' 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 69718 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69718 00:06:35.980 ************************************ 00:06:35.980 END TEST skip_rpc 00:06:35.980 ************************************ 00:06:35.980 00:06:35.980 real 0m5.332s 00:06:35.980 user 0m4.839s 00:06:35.980 sys 0m0.391s 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.980 04:52:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 04:52:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:35.980 04:52:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.980 04:52:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.980 04:52:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 ************************************ 00:06:35.980 START TEST skip_rpc_with_json 00:06:35.980 ************************************ 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69805 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69805 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69805 ']' 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.980 04:52:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.980 [2024-11-21 04:52:52.475000] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:35.980 [2024-11-21 04:52:52.475482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69805 ] 00:06:35.980 [2024-11-21 04:52:52.635580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.980 [2024-11-21 04:52:52.671010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.916 [2024-11-21 04:52:53.310400] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:36.916 request: 00:06:36.916 { 00:06:36.916 "trtype": "tcp", 00:06:36.916 "method": "nvmf_get_transports", 00:06:36.916 "req_id": 1 00:06:36.916 } 00:06:36.916 Got JSON-RPC error response 00:06:36.916 response: 00:06:36.916 { 00:06:36.916 "code": -19, 00:06:36.916 "message": "No such device" 00:06:36.916 } 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.916 [2024-11-21 04:52:53.322497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.916 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.916 { 00:06:36.916 "subsystems": [ 00:06:36.916 { 00:06:36.916 "subsystem": "fsdev", 00:06:36.916 "config": [ 00:06:36.916 { 00:06:36.917 "method": "fsdev_set_opts", 00:06:36.917 "params": { 00:06:36.917 "fsdev_io_pool_size": 65535, 00:06:36.917 "fsdev_io_cache_size": 256 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "keyring", 00:06:36.917 "config": [] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "iobuf", 00:06:36.917 "config": [ 00:06:36.917 { 00:06:36.917 "method": "iobuf_set_options", 00:06:36.917 "params": { 00:06:36.917 "small_pool_count": 8192, 00:06:36.917 "large_pool_count": 1024, 00:06:36.917 "small_bufsize": 8192, 00:06:36.917 "large_bufsize": 135168, 00:06:36.917 "enable_numa": false 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "sock", 00:06:36.917 "config": [ 00:06:36.917 { 
00:06:36.917 "method": "sock_set_default_impl", 00:06:36.917 "params": { 00:06:36.917 "impl_name": "posix" 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "sock_impl_set_options", 00:06:36.917 "params": { 00:06:36.917 "impl_name": "ssl", 00:06:36.917 "recv_buf_size": 4096, 00:06:36.917 "send_buf_size": 4096, 00:06:36.917 "enable_recv_pipe": true, 00:06:36.917 "enable_quickack": false, 00:06:36.917 "enable_placement_id": 0, 00:06:36.917 "enable_zerocopy_send_server": true, 00:06:36.917 "enable_zerocopy_send_client": false, 00:06:36.917 "zerocopy_threshold": 0, 00:06:36.917 "tls_version": 0, 00:06:36.917 "enable_ktls": false 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "sock_impl_set_options", 00:06:36.917 "params": { 00:06:36.917 "impl_name": "posix", 00:06:36.917 "recv_buf_size": 2097152, 00:06:36.917 "send_buf_size": 2097152, 00:06:36.917 "enable_recv_pipe": true, 00:06:36.917 "enable_quickack": false, 00:06:36.917 "enable_placement_id": 0, 00:06:36.917 "enable_zerocopy_send_server": true, 00:06:36.917 "enable_zerocopy_send_client": false, 00:06:36.917 "zerocopy_threshold": 0, 00:06:36.917 "tls_version": 0, 00:06:36.917 "enable_ktls": false 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "vmd", 00:06:36.917 "config": [] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "accel", 00:06:36.917 "config": [ 00:06:36.917 { 00:06:36.917 "method": "accel_set_options", 00:06:36.917 "params": { 00:06:36.917 "small_cache_size": 128, 00:06:36.917 "large_cache_size": 16, 00:06:36.917 "task_count": 2048, 00:06:36.917 "sequence_count": 2048, 00:06:36.917 "buf_count": 2048 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "bdev", 00:06:36.917 "config": [ 00:06:36.917 { 00:06:36.917 "method": "bdev_set_options", 00:06:36.917 "params": { 00:06:36.917 "bdev_io_pool_size": 65535, 00:06:36.917 "bdev_io_cache_size": 256, 00:06:36.917 "bdev_auto_examine": true, 00:06:36.917 "iobuf_small_cache_size": 128, 00:06:36.917 "iobuf_large_cache_size": 16 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "bdev_raid_set_options", 00:06:36.917 "params": { 00:06:36.917 "process_window_size_kb": 1024, 00:06:36.917 "process_max_bandwidth_mb_sec": 0 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "bdev_iscsi_set_options", 00:06:36.917 "params": { 00:06:36.917 "timeout_sec": 30 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "bdev_nvme_set_options", 00:06:36.917 "params": { 00:06:36.917 "action_on_timeout": "none", 00:06:36.917 "timeout_us": 0, 00:06:36.917 "timeout_admin_us": 0, 00:06:36.917 "keep_alive_timeout_ms": 10000, 00:06:36.917 "arbitration_burst": 0, 00:06:36.917 "low_priority_weight": 0, 00:06:36.917 "medium_priority_weight": 0, 00:06:36.917 "high_priority_weight": 0, 00:06:36.917 "nvme_adminq_poll_period_us": 10000, 00:06:36.917 "nvme_ioq_poll_period_us": 0, 00:06:36.917 "io_queue_requests": 0, 00:06:36.917 "delay_cmd_submit": true, 00:06:36.917 "transport_retry_count": 4, 00:06:36.917 "bdev_retry_count": 3, 00:06:36.917 "transport_ack_timeout": 0, 00:06:36.917 "ctrlr_loss_timeout_sec": 0, 00:06:36.917 "reconnect_delay_sec": 0, 00:06:36.917 "fast_io_fail_timeout_sec": 0, 00:06:36.917 "disable_auto_failback": false, 00:06:36.917 "generate_uuids": false, 00:06:36.917 "transport_tos": 0, 00:06:36.917 "nvme_error_stat": false, 00:06:36.917 "rdma_srq_size": 0, 00:06:36.917 "io_path_stat": false, 
00:06:36.917 "allow_accel_sequence": false, 00:06:36.917 "rdma_max_cq_size": 0, 00:06:36.917 "rdma_cm_event_timeout_ms": 0, 00:06:36.917 "dhchap_digests": [ 00:06:36.917 "sha256", 00:06:36.917 "sha384", 00:06:36.917 "sha512" 00:06:36.917 ], 00:06:36.917 "dhchap_dhgroups": [ 00:06:36.917 "null", 00:06:36.917 "ffdhe2048", 00:06:36.917 "ffdhe3072", 00:06:36.917 "ffdhe4096", 00:06:36.917 "ffdhe6144", 00:06:36.917 "ffdhe8192" 00:06:36.917 ] 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "bdev_nvme_set_hotplug", 00:06:36.917 "params": { 00:06:36.917 "period_us": 100000, 00:06:36.917 "enable": false 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "bdev_wait_for_examine" 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "scsi", 00:06:36.917 "config": null 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "scheduler", 00:06:36.917 "config": [ 00:06:36.917 { 00:06:36.917 "method": "framework_set_scheduler", 00:06:36.917 "params": { 00:06:36.917 "name": "static" 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "vhost_scsi", 00:06:36.917 "config": [] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "vhost_blk", 00:06:36.917 "config": [] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "ublk", 00:06:36.917 "config": [] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "nbd", 00:06:36.917 "config": [] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "nvmf", 00:06:36.917 "config": [ 00:06:36.917 { 00:06:36.917 "method": "nvmf_set_config", 00:06:36.917 "params": { 00:06:36.917 "discovery_filter": "match_any", 00:06:36.917 "admin_cmd_passthru": { 00:06:36.917 "identify_ctrlr": false 00:06:36.917 }, 00:06:36.917 "dhchap_digests": [ 00:06:36.917 "sha256", 00:06:36.917 "sha384", 00:06:36.917 "sha512" 00:06:36.917 ], 00:06:36.917 "dhchap_dhgroups": [ 00:06:36.917 "null", 00:06:36.917 "ffdhe2048", 00:06:36.917 "ffdhe3072", 00:06:36.917 "ffdhe4096", 00:06:36.917 "ffdhe6144", 00:06:36.917 "ffdhe8192" 00:06:36.917 ] 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "nvmf_set_max_subsystems", 00:06:36.917 "params": { 00:06:36.917 "max_subsystems": 1024 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "nvmf_set_crdt", 00:06:36.917 "params": { 00:06:36.917 "crdt1": 0, 00:06:36.917 "crdt2": 0, 00:06:36.917 "crdt3": 0 00:06:36.917 } 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "method": "nvmf_create_transport", 00:06:36.917 "params": { 00:06:36.917 "trtype": "TCP", 00:06:36.917 "max_queue_depth": 128, 00:06:36.917 "max_io_qpairs_per_ctrlr": 127, 00:06:36.917 "in_capsule_data_size": 4096, 00:06:36.917 "max_io_size": 131072, 00:06:36.917 "io_unit_size": 131072, 00:06:36.917 "max_aq_depth": 128, 00:06:36.917 "num_shared_buffers": 511, 00:06:36.917 "buf_cache_size": 4294967295, 00:06:36.917 "dif_insert_or_strip": false, 00:06:36.917 "zcopy": false, 00:06:36.917 "c2h_success": true, 00:06:36.917 "sock_priority": 0, 00:06:36.917 "abort_timeout_sec": 1, 00:06:36.917 "ack_timeout": 0, 00:06:36.917 "data_wr_pool_size": 0 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 }, 00:06:36.917 { 00:06:36.917 "subsystem": "iscsi", 00:06:36.917 "config": [ 00:06:36.917 { 00:06:36.917 "method": "iscsi_set_options", 00:06:36.917 "params": { 00:06:36.917 "node_base": "iqn.2016-06.io.spdk", 00:06:36.917 "max_sessions": 128, 00:06:36.917 "max_connections_per_session": 2, 00:06:36.917 "max_queue_depth": 64, 00:06:36.917 
"default_time2wait": 2, 00:06:36.917 "default_time2retain": 20, 00:06:36.917 "first_burst_length": 8192, 00:06:36.917 "immediate_data": true, 00:06:36.917 "allow_duplicated_isid": false, 00:06:36.917 "error_recovery_level": 0, 00:06:36.917 "nop_timeout": 60, 00:06:36.917 "nop_in_interval": 30, 00:06:36.917 "disable_chap": false, 00:06:36.917 "require_chap": false, 00:06:36.917 "mutual_chap": false, 00:06:36.917 "chap_group": 0, 00:06:36.917 "max_large_datain_per_connection": 64, 00:06:36.917 "max_r2t_per_connection": 4, 00:06:36.917 "pdu_pool_size": 36864, 00:06:36.917 "immediate_data_pool_size": 16384, 00:06:36.917 "data_out_pool_size": 2048 00:06:36.917 } 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 } 00:06:36.917 ] 00:06:36.917 } 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69805 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69805 ']' 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69805 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69805 00:06:36.918 killing process with pid 69805 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69805' 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69805 00:06:36.918 04:52:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69805 00:06:37.179 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69828 00:06:37.179 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:37.179 04:52:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69828 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69828 ']' 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69828 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69828 00:06:42.447 killing process with pid 69828 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69828' 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69828 00:06:42.447 04:52:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69828 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:42.447 00:06:42.447 real 0m6.731s 00:06:42.447 user 0m6.293s 00:06:42.447 sys 0m0.665s 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.447 ************************************ 00:06:42.447 END TEST skip_rpc_with_json 00:06:42.447 ************************************ 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.447 04:52:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:42.447 04:52:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.447 04:52:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.447 04:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.447 ************************************ 00:06:42.447 START TEST skip_rpc_with_delay 00:06:42.447 ************************************ 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:42.447 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.708 [2024-11-21 04:52:59.249547] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
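The startup failure above is exactly what skip_rpc_with_delay asserts: spdk_tgt must refuse --wait-for-rpc when it is started with --no-rpc-server. A minimal standalone sketch of that check, assuming the same spdk_tgt binary path used throughout this run (the SPDK_BIN variable name is illustrative, not taken from the test scripts):

# Expect this flag combination to fail fast with a non-zero exit code.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'unexpected: spdk_tgt started despite --wait-for-rpc with --no-rpc-server' >&2
    exit 1
fi
echo 'got the expected startup failure'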
00:06:42.708 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:42.708 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.708 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.708 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.708 00:06:42.708 real 0m0.122s 00:06:42.708 user 0m0.069s 00:06:42.708 sys 0m0.052s 00:06:42.708 ************************************ 00:06:42.708 END TEST skip_rpc_with_delay 00:06:42.708 ************************************ 00:06:42.708 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.708 04:52:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:42.708 04:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:42.708 04:52:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:42.708 04:52:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:42.708 04:52:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.708 04:52:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.708 04:52:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.708 ************************************ 00:06:42.708 START TEST exit_on_failed_rpc_init 00:06:42.708 ************************************ 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:42.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69940 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69940 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69940 ']' 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:42.708 04:52:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.708 [2024-11-21 04:52:59.424458] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:06:42.708 [2024-11-21 04:52:59.424577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69940 ] 00:06:42.967 [2024-11-21 04:52:59.579813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.967 [2024-11-21 04:52:59.602258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:43.533 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.792 [2024-11-21 04:53:00.326035] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:43.792 [2024-11-21 04:53:00.326592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ] 00:06:43.792 [2024-11-21 04:53:00.484470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.792 [2024-11-21 04:53:00.508800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.792 [2024-11-21 04:53:00.508884] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:43.792 [2024-11-21 04:53:00.508906] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:43.792 [2024-11-21 04:53:00.508919] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69940 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69940 ']' 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69940 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69940 00:06:44.050 killing process with pid 69940 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69940' 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69940 00:06:44.050 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69940 00:06:44.309 ************************************ 00:06:44.310 END TEST exit_on_failed_rpc_init 00:06:44.310 ************************************ 00:06:44.310 00:06:44.310 real 0m1.538s 00:06:44.310 user 0m1.660s 00:06:44.310 sys 0m0.403s 00:06:44.310 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.310 04:53:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:44.310 04:53:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:44.310 00:06:44.310 real 0m14.107s 00:06:44.310 user 0m13.006s 00:06:44.310 sys 0m1.705s 00:06:44.310 04:53:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.310 ************************************ 00:06:44.310 END TEST skip_rpc 00:06:44.310 ************************************ 00:06:44.310 04:53:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.310 04:53:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:44.310 04:53:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.310 04:53:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.310 04:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:44.310 
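Before the rpc_client run begins below, note that the exit_on_failed_rpc_init sequence above reduces to a simple socket-collision check: a first spdk_tgt claims /var/tmp/spdk.sock, and a second instance started against the same default socket must exit non-zero with the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error seen in the log. A hedged sketch under those assumptions, with a plain sleep standing in for the test harness's waitforlisten helper:

# First instance claims the default RPC socket.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &
first_pid=$!
sleep 2    # crude stand-in for waitforlisten /var/tmp/spdk.sock
# Second instance on the same default socket is expected to fail.
if "$SPDK_BIN" -m 0x2; then
    echo 'unexpected: second spdk_tgt initialized its RPC server' >&2
fi
kill "$first_pid"
wait "$first_pid" 2>/dev/null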
************************************ 00:06:44.310 START TEST rpc_client 00:06:44.310 ************************************ 00:06:44.310 04:53:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:44.569 * Looking for test storage... 00:06:44.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.569 04:53:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.569 --rc genhtml_branch_coverage=1 00:06:44.569 --rc genhtml_function_coverage=1 00:06:44.569 --rc genhtml_legend=1 00:06:44.569 --rc geninfo_all_blocks=1 00:06:44.569 --rc geninfo_unexecuted_blocks=1 00:06:44.569 00:06:44.569 ' 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.569 --rc genhtml_branch_coverage=1 00:06:44.569 --rc genhtml_function_coverage=1 00:06:44.569 --rc genhtml_legend=1 00:06:44.569 --rc geninfo_all_blocks=1 00:06:44.569 --rc geninfo_unexecuted_blocks=1 00:06:44.569 00:06:44.569 ' 00:06:44.569 04:53:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.570 --rc genhtml_branch_coverage=1 00:06:44.570 --rc genhtml_function_coverage=1 00:06:44.570 --rc genhtml_legend=1 00:06:44.570 --rc geninfo_all_blocks=1 00:06:44.570 --rc geninfo_unexecuted_blocks=1 00:06:44.570 00:06:44.570 ' 00:06:44.570 04:53:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.570 --rc genhtml_branch_coverage=1 00:06:44.570 --rc genhtml_function_coverage=1 00:06:44.570 --rc genhtml_legend=1 00:06:44.570 --rc geninfo_all_blocks=1 00:06:44.570 --rc geninfo_unexecuted_blocks=1 00:06:44.570 00:06:44.570 ' 00:06:44.570 04:53:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:44.570 OK 00:06:44.570 04:53:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:44.570 00:06:44.570 real 0m0.176s 00:06:44.570 user 0m0.098s 00:06:44.570 sys 0m0.081s 00:06:44.570 04:53:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.570 04:53:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:44.570 ************************************ 00:06:44.570 END TEST rpc_client 00:06:44.570 ************************************ 00:06:44.570 04:53:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:44.570 04:53:01 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.570 04:53:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.570 04:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:44.570 ************************************ 00:06:44.570 START TEST json_config 00:06:44.570 ************************************ 00:06:44.570 04:53:01 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:44.570 04:53:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.570 04:53:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.570 04:53:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.830 04:53:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.830 04:53:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.830 04:53:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.830 04:53:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.830 04:53:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.830 04:53:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:44.830 04:53:01 json_config -- scripts/common.sh@345 -- # : 1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.830 04:53:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.830 04:53:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@353 -- # local d=1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.830 04:53:01 json_config -- scripts/common.sh@355 -- # echo 1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.830 04:53:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@353 -- # local d=2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.830 04:53:01 json_config -- scripts/common.sh@355 -- # echo 2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.830 04:53:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.830 04:53:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.830 04:53:01 json_config -- scripts/common.sh@368 -- # return 0 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.830 --rc genhtml_branch_coverage=1 00:06:44.830 --rc genhtml_function_coverage=1 00:06:44.830 --rc genhtml_legend=1 00:06:44.830 --rc geninfo_all_blocks=1 00:06:44.830 --rc geninfo_unexecuted_blocks=1 00:06:44.830 00:06:44.830 ' 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.830 --rc genhtml_branch_coverage=1 00:06:44.830 --rc genhtml_function_coverage=1 00:06:44.830 --rc genhtml_legend=1 00:06:44.830 --rc geninfo_all_blocks=1 00:06:44.830 --rc geninfo_unexecuted_blocks=1 00:06:44.830 00:06:44.830 ' 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.830 --rc genhtml_branch_coverage=1 00:06:44.830 --rc genhtml_function_coverage=1 00:06:44.830 --rc genhtml_legend=1 00:06:44.830 --rc geninfo_all_blocks=1 00:06:44.830 --rc geninfo_unexecuted_blocks=1 00:06:44.830 00:06:44.830 ' 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.830 --rc genhtml_branch_coverage=1 00:06:44.830 --rc genhtml_function_coverage=1 00:06:44.830 --rc genhtml_legend=1 00:06:44.830 --rc geninfo_all_blocks=1 00:06:44.830 --rc geninfo_unexecuted_blocks=1 00:06:44.830 00:06:44.830 ' 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.830 04:53:01 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dbf62c29-9d5f-4666-9cee-22902cff7e75 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=dbf62c29-9d5f-4666-9cee-22902cff7e75 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.830 04:53:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.830 04:53:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.830 04:53:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.830 04:53:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.830 04:53:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.830 04:53:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.830 04:53:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.830 04:53:01 json_config -- paths/export.sh@5 -- # export PATH 00:06:44.830 04:53:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@51 -- # : 0 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.830 04:53:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.830 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.830 04:53:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:44.830 WARNING: No tests are enabled so not running JSON configuration tests 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:44.830 04:53:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:44.830 00:06:44.830 real 0m0.153s 00:06:44.830 user 0m0.098s 00:06:44.830 sys 0m0.055s 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.830 04:53:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.830 ************************************ 00:06:44.830 END TEST json_config 00:06:44.831 ************************************ 00:06:44.831 04:53:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.831 04:53:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.831 04:53:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.831 04:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:44.831 ************************************ 00:06:44.831 START TEST json_config_extra_key 00:06:44.831 ************************************ 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.831 04:53:01 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.831 --rc genhtml_branch_coverage=1 00:06:44.831 --rc genhtml_function_coverage=1 00:06:44.831 --rc genhtml_legend=1 00:06:44.831 --rc geninfo_all_blocks=1 00:06:44.831 --rc geninfo_unexecuted_blocks=1 00:06:44.831 00:06:44.831 ' 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.831 --rc genhtml_branch_coverage=1 00:06:44.831 --rc genhtml_function_coverage=1 00:06:44.831 --rc genhtml_legend=1 00:06:44.831 --rc geninfo_all_blocks=1 00:06:44.831 --rc geninfo_unexecuted_blocks=1 00:06:44.831 00:06:44.831 ' 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.831 --rc genhtml_branch_coverage=1 00:06:44.831 --rc genhtml_function_coverage=1 00:06:44.831 --rc genhtml_legend=1 00:06:44.831 --rc geninfo_all_blocks=1 00:06:44.831 --rc geninfo_unexecuted_blocks=1 00:06:44.831 00:06:44.831 ' 00:06:44.831 04:53:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.831 --rc genhtml_branch_coverage=1 00:06:44.831 --rc 
genhtml_function_coverage=1 00:06:44.831 --rc genhtml_legend=1 00:06:44.831 --rc geninfo_all_blocks=1 00:06:44.831 --rc geninfo_unexecuted_blocks=1 00:06:44.831 00:06:44.831 ' 00:06:44.831 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dbf62c29-9d5f-4666-9cee-22902cff7e75 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=dbf62c29-9d5f-4666-9cee-22902cff7e75 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.831 04:53:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.831 04:53:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.831 04:53:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.831 04:53:01 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.831 04:53:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:44.831 04:53:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.831 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.831 04:53:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.092 INFO: launching applications... 00:06:45.092 04:53:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.092 04:53:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
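The trace above shows test/json_config/common.sh keeping its per-app state in bash associative arrays keyed by app name: app_pid, app_socket, app_params, and configs_path, all indexed by 'target' here. A minimal sketch of that pattern, reusing the exact values from the trace (the launch line itself is illustrative, not the helper's literal body):

```bash
# Per-app bookkeeping keyed by app name, as declared in json_config/common.sh.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

app=target
# Launch the app with its own RPC socket and JSON config, then record its PID.
# app_params is expanded unquoted on purpose so its flags split into words.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
    -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!
```

This is the same shape as the spdk_tgt invocation traced just below: `-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json`.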
00:06:45.092 04:53:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70140 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:45.092 Waiting for target to run... 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70140 /var/tmp/spdk_tgt.sock 00:06:45.092 04:53:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70140 ']' 00:06:45.092 04:53:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:45.092 04:53:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.092 04:53:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:45.092 04:53:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:45.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:45.092 04:53:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.092 04:53:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:45.092 [2024-11-21 04:53:01.640171] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:06:45.092 [2024-11-21 04:53:01.640430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70140 ] 00:06:45.353 [2024-11-21 04:53:01.995287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.353 [2024-11-21 04:53:02.009109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.919 04:53:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.919 04:53:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:45.919 00:06:45.919 04:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:45.919 INFO: shutting down applications... 
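Right after launching spdk_tgt, the test blocks in waitforlisten 70140 until the target answers on its UNIX-domain RPC socket; the trace shows it keeping a local max_retries=100 and printing the 'Waiting for process...' banner. A hedged reimplementation of that polling idea, probing with spdk_get_version (a real RPC, listed in the rpc_get_methods dump later in this log):

```bash
# Poll the RPC socket until the target answers, or give up after max_retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # Bail out early if the process died before it ever listened.
        kill -0 "$pid" 2>/dev/null || return 1
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 2 -s "$rpc_addr" \
                spdk_get_version &>/dev/null; then
            return 0   # the target is up and serving JSON-RPC
        fi
        sleep 0.5
    done
    return 1
}
```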
00:06:45.919 04:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70140 ]] 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70140 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70140 00:06:45.919 04:53:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70140 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:46.491 SPDK target shutdown done 00:06:46.491 Success 00:06:46.491 04:53:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:46.491 04:53:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:46.491 00:06:46.491 real 0m1.527s 00:06:46.491 user 0m1.204s 00:06:46.491 sys 0m0.395s 00:06:46.491 ************************************ 00:06:46.491 END TEST json_config_extra_key 00:06:46.491 ************************************ 00:06:46.491 04:53:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.491 04:53:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:46.491 04:53:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:46.491 04:53:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.491 04:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.491 04:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.491 ************************************ 00:06:46.491 START TEST alias_rpc 00:06:46.491 ************************************ 00:06:46.491 04:53:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:46.491 * Looking for test storage... 
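The shutdown just traced is the mirror image of the startup wait: send SIGINT, then poll kill -0 up to 30 times with a 0.5 s sleep, clearing app_pid and breaking as soon as the PID is gone. A sketch under the same naming as json_config/common.sh:

```bash
# Ask the app to exit, then wait up to 30 * 0.5 s for its PID to disappear.
json_config_test_shutdown_app() {
    local app=$1 i
    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
            app_pid[$app]=          # clear the slot once the process is gone
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1   # still alive after ~15 s; the caller treats this as a failure
}
```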
00:06:46.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.491 04:53:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.491 --rc genhtml_branch_coverage=1 00:06:46.491 --rc genhtml_function_coverage=1 00:06:46.491 --rc genhtml_legend=1 00:06:46.491 --rc geninfo_all_blocks=1 00:06:46.491 --rc geninfo_unexecuted_blocks=1 00:06:46.491 00:06:46.491 ' 00:06:46.491 04:53:03 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.491 --rc genhtml_branch_coverage=1 00:06:46.491 --rc genhtml_function_coverage=1 00:06:46.491 --rc genhtml_legend=1 00:06:46.491 --rc geninfo_all_blocks=1 00:06:46.491 --rc geninfo_unexecuted_blocks=1 00:06:46.491 00:06:46.491 ' 00:06:46.491 04:53:03 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.491 --rc genhtml_branch_coverage=1 00:06:46.491 --rc genhtml_function_coverage=1 00:06:46.492 --rc genhtml_legend=1 00:06:46.492 --rc geninfo_all_blocks=1 00:06:46.492 --rc geninfo_unexecuted_blocks=1 00:06:46.492 00:06:46.492 ' 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.492 --rc genhtml_branch_coverage=1 00:06:46.492 --rc genhtml_function_coverage=1 00:06:46.492 --rc genhtml_legend=1 00:06:46.492 --rc geninfo_all_blocks=1 00:06:46.492 --rc geninfo_unexecuted_blocks=1 00:06:46.492 00:06:46.492 ' 00:06:46.492 04:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:46.492 04:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70214 00:06:46.492 04:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70214 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70214 ']' 00:06:46.492 04:53:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.492 04:53:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.492 [2024-11-21 04:53:03.207079] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
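Every TEST block in this log opens with the same guard: `lt 1.15 2` asks scripts/common.sh's cmp_versions whether the installed lcov (1.15) predates version 2, which selects the --rc lcov_* options exported just above. As traced, cmp_versions splits both strings with IFS=.-: and compares them field by field, treating missing fields as 0. A condensed, hedged rendering of that algorithm (it omits the decimal digit-validation step visible in the trace):

```bash
# Field-by-field version comparison; usage: cmp_versions 1.15 '<' 2
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v d
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # Missing fields compare as 0, so 1.15 behaves like 1.15.0.
        d=$(( ${ver1[v]:-0} > ${ver2[v]:-0} ? 1 : ${ver1[v]:-0} < ${ver2[v]:-0} ? -1 : 0 ))
        case $op in
            '<') (( d < 0 )) && return 0; (( d > 0 )) && return 1 ;;
            '>') (( d > 0 )) && return 0; (( d < 0 )) && return 1 ;;
        esac
    done
    return 1   # all fields equal: a strict comparison fails
}
lt() { cmp_versions "$1" '<' "$2"; }   # wrapper shape implied by the trace
```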
00:06:46.492 [2024-11-21 04:53:03.207200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70214 ] 00:06:46.752 [2024-11-21 04:53:03.364647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.752 [2024-11-21 04:53:03.394544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.324 04:53:04 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.324 04:53:04 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:47.324 04:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:47.585 04:53:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70214 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70214 ']' 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70214 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70214 00:06:47.585 killing process with pid 70214 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70214' 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 70214 00:06:47.585 04:53:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 70214 00:06:48.155 ************************************ 00:06:48.155 END TEST alias_rpc 00:06:48.155 ************************************ 00:06:48.155 00:06:48.155 real 0m1.615s 00:06:48.155 user 0m1.708s 00:06:48.155 sys 0m0.411s 00:06:48.155 04:53:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.155 04:53:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 04:53:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:48.155 04:53:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:48.155 04:53:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.155 04:53:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.155 04:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:48.155 ************************************ 00:06:48.155 START TEST spdkcli_tcp 00:06:48.155 ************************************ 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:48.155 * Looking for test storage... 
00:06:48.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.155 04:53:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.155 --rc genhtml_branch_coverage=1 00:06:48.155 --rc genhtml_function_coverage=1 00:06:48.155 --rc genhtml_legend=1 00:06:48.155 --rc geninfo_all_blocks=1 00:06:48.155 --rc geninfo_unexecuted_blocks=1 00:06:48.155 00:06:48.155 ' 00:06:48.155 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.155 --rc genhtml_branch_coverage=1 00:06:48.155 --rc genhtml_function_coverage=1 00:06:48.156 --rc genhtml_legend=1 00:06:48.156 --rc geninfo_all_blocks=1 00:06:48.156 --rc geninfo_unexecuted_blocks=1 00:06:48.156 
00:06:48.156 ' 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.156 --rc genhtml_branch_coverage=1 00:06:48.156 --rc genhtml_function_coverage=1 00:06:48.156 --rc genhtml_legend=1 00:06:48.156 --rc geninfo_all_blocks=1 00:06:48.156 --rc geninfo_unexecuted_blocks=1 00:06:48.156 00:06:48.156 ' 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.156 --rc genhtml_branch_coverage=1 00:06:48.156 --rc genhtml_function_coverage=1 00:06:48.156 --rc genhtml_legend=1 00:06:48.156 --rc geninfo_all_blocks=1 00:06:48.156 --rc geninfo_unexecuted_blocks=1 00:06:48.156 00:06:48.156 ' 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70293 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70293 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70293 ']' 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.156 04:53:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.156 04:53:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.156 [2024-11-21 04:53:04.870542] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
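Unlike the earlier single-core runs (-m 0x1), spdkcli_tcp starts its target with -m 0x3. The mask is a per-core bitmap: 0x3 = 0b11 selects cores 0 and 1, which is why the next lines report two reactors starting. A small illustrative decoder (the helper name is made up for this note):

```bash
# Decode an SPDK/DPDK core mask into the list of selected cores.
decode_coremask() {
    local mask=$(( $1 )) core
    for ((core = 0; mask > 0; core++, mask >>= 1)); do
        (( mask & 1 )) && echo "core $core"
    done
}
decode_coremask 0x3   # prints "core 0" and "core 1", matching the two reactors
```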
00:06:48.156 [2024-11-21 04:53:04.870809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70293 ] 00:06:48.416 [2024-11-21 04:53:05.028476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.416 [2024-11-21 04:53:05.053829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.416 [2024-11-21 04:53:05.053920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.987 04:53:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.987 04:53:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:48.987 04:53:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70305 00:06:48.987 04:53:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:48.987 04:53:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:49.248 [ 00:06:49.248 "bdev_malloc_delete", 00:06:49.248 "bdev_malloc_create", 00:06:49.248 "bdev_null_resize", 00:06:49.248 "bdev_null_delete", 00:06:49.248 "bdev_null_create", 00:06:49.248 "bdev_nvme_cuse_unregister", 00:06:49.248 "bdev_nvme_cuse_register", 00:06:49.248 "bdev_opal_new_user", 00:06:49.248 "bdev_opal_set_lock_state", 00:06:49.248 "bdev_opal_delete", 00:06:49.248 "bdev_opal_get_info", 00:06:49.248 "bdev_opal_create", 00:06:49.248 "bdev_nvme_opal_revert", 00:06:49.248 "bdev_nvme_opal_init", 00:06:49.248 "bdev_nvme_send_cmd", 00:06:49.248 "bdev_nvme_set_keys", 00:06:49.248 "bdev_nvme_get_path_iostat", 00:06:49.248 "bdev_nvme_get_mdns_discovery_info", 00:06:49.248 "bdev_nvme_stop_mdns_discovery", 00:06:49.248 "bdev_nvme_start_mdns_discovery", 00:06:49.248 "bdev_nvme_set_multipath_policy", 00:06:49.248 "bdev_nvme_set_preferred_path", 00:06:49.248 "bdev_nvme_get_io_paths", 00:06:49.248 "bdev_nvme_remove_error_injection", 00:06:49.248 "bdev_nvme_add_error_injection", 00:06:49.248 "bdev_nvme_get_discovery_info", 00:06:49.248 "bdev_nvme_stop_discovery", 00:06:49.248 "bdev_nvme_start_discovery", 00:06:49.248 "bdev_nvme_get_controller_health_info", 00:06:49.248 "bdev_nvme_disable_controller", 00:06:49.248 "bdev_nvme_enable_controller", 00:06:49.248 "bdev_nvme_reset_controller", 00:06:49.248 "bdev_nvme_get_transport_statistics", 00:06:49.248 "bdev_nvme_apply_firmware", 00:06:49.248 "bdev_nvme_detach_controller", 00:06:49.248 "bdev_nvme_get_controllers", 00:06:49.248 "bdev_nvme_attach_controller", 00:06:49.248 "bdev_nvme_set_hotplug", 00:06:49.248 "bdev_nvme_set_options", 00:06:49.248 "bdev_passthru_delete", 00:06:49.248 "bdev_passthru_create", 00:06:49.248 "bdev_lvol_set_parent_bdev", 00:06:49.248 "bdev_lvol_set_parent", 00:06:49.248 "bdev_lvol_check_shallow_copy", 00:06:49.248 "bdev_lvol_start_shallow_copy", 00:06:49.248 "bdev_lvol_grow_lvstore", 00:06:49.248 "bdev_lvol_get_lvols", 00:06:49.248 "bdev_lvol_get_lvstores", 00:06:49.248 "bdev_lvol_delete", 00:06:49.248 "bdev_lvol_set_read_only", 00:06:49.248 "bdev_lvol_resize", 00:06:49.248 "bdev_lvol_decouple_parent", 00:06:49.248 "bdev_lvol_inflate", 00:06:49.248 "bdev_lvol_rename", 00:06:49.248 "bdev_lvol_clone_bdev", 00:06:49.248 "bdev_lvol_clone", 00:06:49.248 "bdev_lvol_snapshot", 00:06:49.248 "bdev_lvol_create", 00:06:49.248 "bdev_lvol_delete_lvstore", 00:06:49.248 "bdev_lvol_rename_lvstore", 00:06:49.248 
"bdev_lvol_create_lvstore", 00:06:49.248 "bdev_raid_set_options", 00:06:49.248 "bdev_raid_remove_base_bdev", 00:06:49.248 "bdev_raid_add_base_bdev", 00:06:49.248 "bdev_raid_delete", 00:06:49.248 "bdev_raid_create", 00:06:49.248 "bdev_raid_get_bdevs", 00:06:49.248 "bdev_error_inject_error", 00:06:49.248 "bdev_error_delete", 00:06:49.248 "bdev_error_create", 00:06:49.248 "bdev_split_delete", 00:06:49.248 "bdev_split_create", 00:06:49.248 "bdev_delay_delete", 00:06:49.248 "bdev_delay_create", 00:06:49.248 "bdev_delay_update_latency", 00:06:49.248 "bdev_zone_block_delete", 00:06:49.248 "bdev_zone_block_create", 00:06:49.248 "blobfs_create", 00:06:49.248 "blobfs_detect", 00:06:49.248 "blobfs_set_cache_size", 00:06:49.248 "bdev_xnvme_delete", 00:06:49.248 "bdev_xnvme_create", 00:06:49.248 "bdev_aio_delete", 00:06:49.248 "bdev_aio_rescan", 00:06:49.248 "bdev_aio_create", 00:06:49.248 "bdev_ftl_set_property", 00:06:49.248 "bdev_ftl_get_properties", 00:06:49.248 "bdev_ftl_get_stats", 00:06:49.248 "bdev_ftl_unmap", 00:06:49.248 "bdev_ftl_unload", 00:06:49.248 "bdev_ftl_delete", 00:06:49.248 "bdev_ftl_load", 00:06:49.248 "bdev_ftl_create", 00:06:49.248 "bdev_virtio_attach_controller", 00:06:49.248 "bdev_virtio_scsi_get_devices", 00:06:49.248 "bdev_virtio_detach_controller", 00:06:49.248 "bdev_virtio_blk_set_hotplug", 00:06:49.248 "bdev_iscsi_delete", 00:06:49.248 "bdev_iscsi_create", 00:06:49.248 "bdev_iscsi_set_options", 00:06:49.248 "accel_error_inject_error", 00:06:49.248 "ioat_scan_accel_module", 00:06:49.248 "dsa_scan_accel_module", 00:06:49.248 "iaa_scan_accel_module", 00:06:49.248 "keyring_file_remove_key", 00:06:49.248 "keyring_file_add_key", 00:06:49.248 "keyring_linux_set_options", 00:06:49.248 "fsdev_aio_delete", 00:06:49.248 "fsdev_aio_create", 00:06:49.248 "iscsi_get_histogram", 00:06:49.248 "iscsi_enable_histogram", 00:06:49.248 "iscsi_set_options", 00:06:49.248 "iscsi_get_auth_groups", 00:06:49.248 "iscsi_auth_group_remove_secret", 00:06:49.248 "iscsi_auth_group_add_secret", 00:06:49.248 "iscsi_delete_auth_group", 00:06:49.248 "iscsi_create_auth_group", 00:06:49.248 "iscsi_set_discovery_auth", 00:06:49.248 "iscsi_get_options", 00:06:49.248 "iscsi_target_node_request_logout", 00:06:49.248 "iscsi_target_node_set_redirect", 00:06:49.248 "iscsi_target_node_set_auth", 00:06:49.248 "iscsi_target_node_add_lun", 00:06:49.248 "iscsi_get_stats", 00:06:49.248 "iscsi_get_connections", 00:06:49.248 "iscsi_portal_group_set_auth", 00:06:49.248 "iscsi_start_portal_group", 00:06:49.248 "iscsi_delete_portal_group", 00:06:49.248 "iscsi_create_portal_group", 00:06:49.248 "iscsi_get_portal_groups", 00:06:49.248 "iscsi_delete_target_node", 00:06:49.248 "iscsi_target_node_remove_pg_ig_maps", 00:06:49.248 "iscsi_target_node_add_pg_ig_maps", 00:06:49.248 "iscsi_create_target_node", 00:06:49.248 "iscsi_get_target_nodes", 00:06:49.248 "iscsi_delete_initiator_group", 00:06:49.248 "iscsi_initiator_group_remove_initiators", 00:06:49.248 "iscsi_initiator_group_add_initiators", 00:06:49.248 "iscsi_create_initiator_group", 00:06:49.248 "iscsi_get_initiator_groups", 00:06:49.248 "nvmf_set_crdt", 00:06:49.248 "nvmf_set_config", 00:06:49.248 "nvmf_set_max_subsystems", 00:06:49.248 "nvmf_stop_mdns_prr", 00:06:49.248 "nvmf_publish_mdns_prr", 00:06:49.248 "nvmf_subsystem_get_listeners", 00:06:49.248 "nvmf_subsystem_get_qpairs", 00:06:49.248 "nvmf_subsystem_get_controllers", 00:06:49.248 "nvmf_get_stats", 00:06:49.248 "nvmf_get_transports", 00:06:49.248 "nvmf_create_transport", 00:06:49.248 "nvmf_get_targets", 00:06:49.248 
"nvmf_delete_target", 00:06:49.248 "nvmf_create_target", 00:06:49.248 "nvmf_subsystem_allow_any_host", 00:06:49.248 "nvmf_subsystem_set_keys", 00:06:49.248 "nvmf_subsystem_remove_host", 00:06:49.248 "nvmf_subsystem_add_host", 00:06:49.248 "nvmf_ns_remove_host", 00:06:49.248 "nvmf_ns_add_host", 00:06:49.248 "nvmf_subsystem_remove_ns", 00:06:49.248 "nvmf_subsystem_set_ns_ana_group", 00:06:49.248 "nvmf_subsystem_add_ns", 00:06:49.248 "nvmf_subsystem_listener_set_ana_state", 00:06:49.248 "nvmf_discovery_get_referrals", 00:06:49.248 "nvmf_discovery_remove_referral", 00:06:49.248 "nvmf_discovery_add_referral", 00:06:49.248 "nvmf_subsystem_remove_listener", 00:06:49.248 "nvmf_subsystem_add_listener", 00:06:49.248 "nvmf_delete_subsystem", 00:06:49.248 "nvmf_create_subsystem", 00:06:49.248 "nvmf_get_subsystems", 00:06:49.248 "env_dpdk_get_mem_stats", 00:06:49.248 "nbd_get_disks", 00:06:49.248 "nbd_stop_disk", 00:06:49.248 "nbd_start_disk", 00:06:49.248 "ublk_recover_disk", 00:06:49.248 "ublk_get_disks", 00:06:49.248 "ublk_stop_disk", 00:06:49.248 "ublk_start_disk", 00:06:49.248 "ublk_destroy_target", 00:06:49.248 "ublk_create_target", 00:06:49.248 "virtio_blk_create_transport", 00:06:49.248 "virtio_blk_get_transports", 00:06:49.248 "vhost_controller_set_coalescing", 00:06:49.248 "vhost_get_controllers", 00:06:49.248 "vhost_delete_controller", 00:06:49.248 "vhost_create_blk_controller", 00:06:49.248 "vhost_scsi_controller_remove_target", 00:06:49.248 "vhost_scsi_controller_add_target", 00:06:49.248 "vhost_start_scsi_controller", 00:06:49.248 "vhost_create_scsi_controller", 00:06:49.248 "thread_set_cpumask", 00:06:49.248 "scheduler_set_options", 00:06:49.248 "framework_get_governor", 00:06:49.248 "framework_get_scheduler", 00:06:49.248 "framework_set_scheduler", 00:06:49.248 "framework_get_reactors", 00:06:49.248 "thread_get_io_channels", 00:06:49.248 "thread_get_pollers", 00:06:49.248 "thread_get_stats", 00:06:49.248 "framework_monitor_context_switch", 00:06:49.248 "spdk_kill_instance", 00:06:49.249 "log_enable_timestamps", 00:06:49.249 "log_get_flags", 00:06:49.249 "log_clear_flag", 00:06:49.249 "log_set_flag", 00:06:49.249 "log_get_level", 00:06:49.249 "log_set_level", 00:06:49.249 "log_get_print_level", 00:06:49.249 "log_set_print_level", 00:06:49.249 "framework_enable_cpumask_locks", 00:06:49.249 "framework_disable_cpumask_locks", 00:06:49.249 "framework_wait_init", 00:06:49.249 "framework_start_init", 00:06:49.249 "scsi_get_devices", 00:06:49.249 "bdev_get_histogram", 00:06:49.249 "bdev_enable_histogram", 00:06:49.249 "bdev_set_qos_limit", 00:06:49.249 "bdev_set_qd_sampling_period", 00:06:49.249 "bdev_get_bdevs", 00:06:49.249 "bdev_reset_iostat", 00:06:49.249 "bdev_get_iostat", 00:06:49.249 "bdev_examine", 00:06:49.249 "bdev_wait_for_examine", 00:06:49.249 "bdev_set_options", 00:06:49.249 "accel_get_stats", 00:06:49.249 "accel_set_options", 00:06:49.249 "accel_set_driver", 00:06:49.249 "accel_crypto_key_destroy", 00:06:49.249 "accel_crypto_keys_get", 00:06:49.249 "accel_crypto_key_create", 00:06:49.249 "accel_assign_opc", 00:06:49.249 "accel_get_module_info", 00:06:49.249 "accel_get_opc_assignments", 00:06:49.249 "vmd_rescan", 00:06:49.249 "vmd_remove_device", 00:06:49.249 "vmd_enable", 00:06:49.249 "sock_get_default_impl", 00:06:49.249 "sock_set_default_impl", 00:06:49.249 "sock_impl_set_options", 00:06:49.249 "sock_impl_get_options", 00:06:49.249 "iobuf_get_stats", 00:06:49.249 "iobuf_set_options", 00:06:49.249 "keyring_get_keys", 00:06:49.249 "framework_get_pci_devices", 00:06:49.249 
"framework_get_config", 00:06:49.249 "framework_get_subsystems", 00:06:49.249 "fsdev_set_opts", 00:06:49.249 "fsdev_get_opts", 00:06:49.249 "trace_get_info", 00:06:49.249 "trace_get_tpoint_group_mask", 00:06:49.249 "trace_disable_tpoint_group", 00:06:49.249 "trace_enable_tpoint_group", 00:06:49.249 "trace_clear_tpoint_mask", 00:06:49.249 "trace_set_tpoint_mask", 00:06:49.249 "notify_get_notifications", 00:06:49.249 "notify_get_types", 00:06:49.249 "spdk_get_version", 00:06:49.249 "rpc_get_methods" 00:06:49.249 ] 00:06:49.249 04:53:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.249 04:53:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:49.249 04:53:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70293 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70293 ']' 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70293 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70293 00:06:49.249 killing process with pid 70293 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70293' 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70293 00:06:49.249 04:53:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70293 00:06:49.822 ************************************ 00:06:49.822 END TEST spdkcli_tcp 00:06:49.822 ************************************ 00:06:49.822 00:06:49.822 real 0m1.590s 00:06:49.822 user 0m2.742s 00:06:49.822 sys 0m0.436s 00:06:49.822 04:53:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.822 04:53:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.822 04:53:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.822 04:53:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.822 04:53:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.822 04:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:49.822 ************************************ 00:06:49.822 START TEST dpdk_mem_utility 00:06:49.822 ************************************ 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:49.822 * Looking for test storage... 
00:06:49.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:49.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.822 04:53:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.822 --rc genhtml_branch_coverage=1 00:06:49.822 --rc genhtml_function_coverage=1 00:06:49.822 --rc genhtml_legend=1 00:06:49.822 --rc geninfo_all_blocks=1 00:06:49.822 --rc geninfo_unexecuted_blocks=1 00:06:49.822 00:06:49.822 ' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.822 --rc genhtml_branch_coverage=1 00:06:49.822 --rc genhtml_function_coverage=1 00:06:49.822 --rc genhtml_legend=1 00:06:49.822 --rc geninfo_all_blocks=1 00:06:49.822 --rc geninfo_unexecuted_blocks=1 00:06:49.822 00:06:49.822 ' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.822 --rc genhtml_branch_coverage=1 00:06:49.822 --rc genhtml_function_coverage=1 00:06:49.822 --rc genhtml_legend=1 00:06:49.822 --rc geninfo_all_blocks=1 00:06:49.822 --rc geninfo_unexecuted_blocks=1 00:06:49.822 00:06:49.822 ' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.822 --rc genhtml_branch_coverage=1 00:06:49.822 --rc genhtml_function_coverage=1 00:06:49.822 --rc genhtml_legend=1 00:06:49.822 --rc geninfo_all_blocks=1 00:06:49.822 --rc geninfo_unexecuted_blocks=1 00:06:49.822 00:06:49.822 ' 00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70388 00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70388 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70388 ']' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.822 [2024-11-21 04:53:06.496623] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
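The target just started for dpdk_mem_utility will be torn down the same way the earlier ones were: through the killprocess helper, which the alias_rpc section above traces in full (a kill -0 liveness check, a `ps --no-headers -o comm=` sanity lookup, then kill and wait). A condensed, hedged sketch of that teardown, skipping the comm-name check:

```bash
# Kill a previously started SPDK process and reap it; tolerate it being gone.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do: already exited
    echo "killing process with pid $pid"
    kill "$pid"                              # default SIGTERM, as traced
    wait "$pid" 2>/dev/null || true          # reap the child; ignore status
}
```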
00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70388
00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70388
00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70388 ']'
00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:49.822 04:53:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:49.822 04:53:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:49.822 [2024-11-21 04:53:06.496623] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:06:49.822 [2024-11-21 04:53:06.497173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70388 ]
00:06:50.084 [2024-11-21 04:53:06.647049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.084 [2024-11-21 04:53:06.671096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.656 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:50.656 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:06:50.656 04:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:50.656 04:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:50.656 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:50.656 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:50.656 {
00:06:50.656 "filename": "/tmp/spdk_mem_dump.txt"
00:06:50.656 }
00:06:50.656 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:50.656 04:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:50.973 DPDK memory size 810.000000 MiB in 1 heap(s)
00:06:50.973 1 heaps totaling size 810.000000 MiB
00:06:50.973 size: 810.000000 MiB heap id: 0
00:06:50.973 end heaps----------
00:06:50.973 9 mempools totaling size 595.772034 MiB
00:06:50.973 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:50.973 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:50.973 size: 92.545471 MiB name: bdev_io_70388
00:06:50.973 size: 50.003479 MiB name: msgpool_70388
00:06:50.973 size: 36.509338 MiB name: fsdev_io_70388
00:06:50.973 size: 21.763794 MiB name: PDU_Pool
00:06:50.973 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:50.973 size: 4.133484 MiB name: evtpool_70388
00:06:50.973 size: 0.026123 MiB name: Session_Pool
00:06:50.973 end mempools-------
00:06:50.973 6 memzones totaling size 4.142822 MiB
00:06:50.973 size: 1.000366 MiB name: RG_ring_0_70388
00:06:50.973 size: 1.000366 MiB name: RG_ring_1_70388
00:06:50.973 size: 1.000366 MiB name: RG_ring_4_70388
00:06:50.973 size: 1.000366 MiB name: RG_ring_5_70388
00:06:50.973 size: 0.125366 MiB name: RG_ring_2_70388
00:06:50.973 size: 0.015991 MiB name: RG_ring_3_70388
00:06:50.973 end memzones-------
00:06:50.973 04:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:50.973 heap id: 0 total size: 810.000000 MiB number of busy elements: 317 number of free elements: 15
00:06:50.973 list of free elements.
size: 10.812500 MiB 00:06:50.973 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:50.973 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:50.973 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:50.973 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:50.973 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:50.973 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:50.973 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:50.973 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:50.973 element at address: 0x20001a600000 with size: 0.566956 MiB 00:06:50.973 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:50.973 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:50.973 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:50.973 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:50.973 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:50.973 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:50.973 list of standard malloc elements. size: 199.268616 MiB 00:06:50.973 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:50.973 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:50.973 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:50.973 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:50.973 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:50.973 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:50.973 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:50.973 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:50.973 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:50.973 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:50.973 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:50.973 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:50.973 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:50.973 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:50.974 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692c80 with size: 0.000183 MiB 
00:06:50.974 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:50.974 element at 
address: 0x20001a695200 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:50.974 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:50.974 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e340 
with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:50.975 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:50.975 list of memzone associated elements. 
size: 599.918884 MiB
00:06:50.975 element at address: 0x20001a695500 with size: 211.416748 MiB
00:06:50.975 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:50.975 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:06:50.975 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:50.975 element at address: 0x200012df4780 with size: 92.045044 MiB
00:06:50.975 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70388_0
00:06:50.975 element at address: 0x200000dff380 with size: 48.003052 MiB
00:06:50.975 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70388_0
00:06:50.975 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:06:50.975 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70388_0
00:06:50.975 element at address: 0x2000191be940 with size: 20.255554 MiB
00:06:50.975 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:50.975 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:06:50.975 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:50.975 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:06:50.975 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70388_0
00:06:50.975 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:06:50.975 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70388
00:06:50.975 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:50.975 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70388
00:06:50.975 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:06:50.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:50.975 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:06:50.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:50.975 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:06:50.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:50.975 element at address: 0x200003efba40 with size: 1.008118 MiB
00:06:50.975 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:50.975 element at address: 0x200000cff180 with size: 1.000488 MiB
00:06:50.975 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70388
00:06:50.975 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:06:50.975 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70388
00:06:50.975 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:06:50.975 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70388
00:06:50.975 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:06:50.975 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70388
00:06:50.975 element at address: 0x20000087f740 with size: 0.500488 MiB
00:06:50.975 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70388
00:06:50.975 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:06:50.975 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70388
00:06:50.975 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:06:50.975 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:50.975 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:06:50.975 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:50.975 element at address: 0x20001907c540 with size: 0.250488 MiB
00:06:50.975 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:50.975 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:06:50.975 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70388
00:06:50.975 element at address: 0x20000085e640 with size: 0.125488 MiB
00:06:50.975 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70388
00:06:50.975 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:06:50.975 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:50.975 element at address: 0x200027a65680 with size: 0.023743 MiB
00:06:50.975 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:50.975 element at address: 0x20000085a380 with size: 0.016113 MiB
00:06:50.975 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70388
00:06:50.975 element at address: 0x200027a6b7c0 with size: 0.002441 MiB
00:06:50.975 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:50.975 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:06:50.975 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70388
00:06:50.975 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:06:50.975 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70388
00:06:50.975 element at address: 0x20000085a180 with size: 0.000305 MiB
00:06:50.975 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70388
00:06:50.975 element at address: 0x200027a6c280 with size: 0.000305 MiB
00:06:50.975 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:50.975 04:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:50.975 04:53:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70388
00:06:50.975 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70388 ']'
00:06:50.975 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70388
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70388
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70388'
killing process with pid 70388
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70388
00:06:50.976 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70388
00:06:51.236
00:06:51.236 real 0m1.498s
00:06:51.236 user 0m1.485s
00:06:51.236 sys 0m0.420s
00:06:51.236 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.236 ************************************
00:06:51.236 04:53:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:51.236 END TEST dpdk_mem_utility
00:06:51.236 ************************************
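Net effect of the dpdk_mem_utility test above, condensed into the commands it drives (a sketch: $SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk, and the harness's rpc_cmd is a thin wrapper around scripts/rpc.py):

$SPDK_DIR/build/bin/spdk_tgt &                    # target comes up and listens on /var/tmp/spdk.sock
$SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats   # DPDK writes its stats; RPC returns {"filename": "/tmp/spdk_mem_dump.txt"}
$SPDK_DIR/scripts/dpdk_mem_info.py                # summarizes the dump: heaps, mempools, memzones
$SPDK_DIR/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap 0, as listed above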
00:06:51.236 04:53:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:51.236 04:53:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:51.236 04:53:07 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.236 04:53:07 -- common/autotest_common.sh@10 -- # set +x
00:06:51.236 ************************************
00:06:51.236 START TEST event
00:06:51.236 ************************************
00:06:51.236 04:53:07 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:51.236 * Looking for test storage...
00:06:51.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:51.236 04:53:07 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:51.236 04:53:07 event -- common/autotest_common.sh@1693 -- # lcov --version
00:06:51.236 04:53:07 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:51.236 04:53:07 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:51.236 04:53:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:51.236 04:53:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:51.236 04:53:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:51.236 04:53:07 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:51.236 04:53:07 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:51.236 04:53:07 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:51.236 04:53:07 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:51.236 04:53:07 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:51.236 04:53:07 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:51.236 04:53:07 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:51.236 04:53:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:51.236 04:53:07 event -- scripts/common.sh@344 -- # case "$op" in
00:06:51.236 04:53:07 event -- scripts/common.sh@345 -- # : 1
00:06:51.236 04:53:07 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:51.236 04:53:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:51.236 04:53:07 event -- scripts/common.sh@365 -- # decimal 1
00:06:51.236 04:53:07 event -- scripts/common.sh@353 -- # local d=1
00:06:51.236 04:53:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:51.236 04:53:07 event -- scripts/common.sh@355 -- # echo 1
00:06:51.496 04:53:07 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:51.496 04:53:07 event -- scripts/common.sh@366 -- # decimal 2
00:06:51.496 04:53:07 event -- scripts/common.sh@353 -- # local d=2
00:06:51.496 04:53:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:51.496 04:53:07 event -- scripts/common.sh@355 -- # echo 2
00:06:51.496 04:53:07 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:51.496 04:53:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:51.496 04:53:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:51.496 04:53:07 event -- scripts/common.sh@368 -- # return 0
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.496 --rc genhtml_branch_coverage=1
00:06:51.496 --rc genhtml_function_coverage=1
00:06:51.496 --rc genhtml_legend=1
00:06:51.496 --rc geninfo_all_blocks=1
00:06:51.496 --rc geninfo_unexecuted_blocks=1
00:06:51.496
00:06:51.496 '
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.496 --rc genhtml_branch_coverage=1
00:06:51.496 --rc genhtml_function_coverage=1
00:06:51.496 --rc genhtml_legend=1
00:06:51.496 --rc geninfo_all_blocks=1
00:06:51.496 --rc geninfo_unexecuted_blocks=1
00:06:51.496
00:06:51.496 '
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.496 --rc genhtml_branch_coverage=1
00:06:51.496 --rc genhtml_function_coverage=1
00:06:51.496 --rc genhtml_legend=1
00:06:51.496 --rc geninfo_all_blocks=1
00:06:51.496 --rc geninfo_unexecuted_blocks=1
00:06:51.496
00:06:51.496 '
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.496 --rc genhtml_branch_coverage=1
00:06:51.496 --rc genhtml_function_coverage=1
00:06:51.496 --rc genhtml_legend=1
00:06:51.496 --rc geninfo_all_blocks=1
00:06:51.496 --rc geninfo_unexecuted_blocks=1
00:06:51.496
00:06:51.496 '
00:06:51.496 04:53:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:51.496 04:53:07 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:51.496 04:53:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:06:51.496 04:53:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.496 04:53:07 event -- common/autotest_common.sh@10 -- # set +x
00:06:51.496 ************************************
00:06:51.496 START TEST event_perf
00:06:51.496 ************************************
00:06:51.496 04:53:07 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:51.496 Running I/O for 1 seconds...[2024-11-21 04:53:08.017328] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:06:51.496 [2024-11-21 04:53:08.017540] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70468 ]
00:06:52.873 [2024-11-21 04:53:08.173763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:52.873 [2024-11-21 04:53:08.201162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:52.873 [2024-11-21 04:53:08.201468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:52.873 [2024-11-21 04:53:08.202079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:52.873 [2024-11-21 04:53:08.202107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.873 Running I/O for 1 seconds...
00:06:52.873
00:06:52.873 lcore 0: 202113
00:06:52.873 lcore 1: 202108
00:06:52.873 lcore 2: 202109
00:06:52.873 lcore 3: 202113
00:06:52.873 done.
00:06:52.873
00:06:52.873 real 0m1.267s
00:06:52.873 user 0m4.072s
00:06:52.873 sys 0m0.079s
00:06:52.873 ************************************
00:06:52.873 END TEST event_perf
00:06:52.873 ************************************
00:06:52.873 04:53:09 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:52.873 04:53:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:52.873 04:53:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:52.873 04:53:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:52.873 04:53:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:52.873 04:53:09 event -- common/autotest_common.sh@10 -- # set +x
00:06:52.873 ************************************
00:06:52.873 START TEST event_reactor
00:06:52.873 ************************************
00:06:52.873 04:53:09 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:52.873 [2024-11-21 04:53:09.322235] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:06:52.873 [2024-11-21 04:53:09.322433] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70508 ]
00:06:52.873 [2024-11-21 04:53:09.475645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.873 [2024-11-21 04:53:09.499051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.818 test_start
00:06:53.818 oneshot
00:06:53.818 tick 100
00:06:53.818 tick 100
00:06:53.818 tick 250
00:06:53.818 tick 100
00:06:53.818 tick 100
00:06:53.818 tick 100
00:06:53.818 tick 250
00:06:53.818 tick 500
00:06:53.818 tick 100
00:06:53.818 tick 100
00:06:53.818 tick 250
00:06:53.818 tick 100
00:06:53.818 tick 100
00:06:53.818 test_end
00:06:54.079
00:06:54.079 real 0m1.253s
00:06:54.079 user 0m1.083s
00:06:54.079 sys 0m0.062s
00:06:54.079 04:53:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:54.079 ************************************
00:06:54.079 END TEST event_reactor
00:06:54.079 ************************************
00:06:54.079 04:53:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:54.079 04:53:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:54.079 04:53:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:54.079 04:53:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:54.079 04:53:10 event -- common/autotest_common.sh@10 -- # set +x
00:06:54.079 ************************************
00:06:54.079 START TEST event_reactor_perf
00:06:54.079 ************************************
00:06:54.080 04:53:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:54.080 [2024-11-21 04:53:10.628656] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:06:54.080 [2024-11-21 04:53:10.628764] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70539 ]
00:06:54.080 [2024-11-21 04:53:10.779407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.080 [2024-11-21 04:53:10.802626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.468 test_start
00:06:55.468 test_end
00:06:55.468 Performance: 315518 events per second
00:06:55.468
00:06:55.468 real 0m1.251s
00:06:55.468 user 0m1.076s
00:06:55.468 sys 0m0.068s
00:06:55.468 04:53:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.468 04:53:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:55.468 ************************************
00:06:55.468 END TEST event_reactor_perf
00:06:55.468 ************************************
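The three binaries exercised above can be run the same way outside the harness. A sketch under the same $SPDK_DIR assumption, with the flag meanings as used here (-m is the reactor core mask and -t the run time in seconds, matching "-m 0xF -t 1" and "Running I/O for 1 seconds" in the trace):

$SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1   # prints per-lcore event counts on 4 cores
$SPDK_DIR/test/event/reactor/reactor -t 1                # prints the oneshot/tick trace from one reactor
$SPDK_DIR/test/event/reactor_perf/reactor_perf -t 1      # prints an events-per-second figure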
00:06:55.468 04:53:11 event -- event/event.sh@49 -- # uname -s
00:06:55.468 04:53:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:55.468 04:53:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:55.468 04:53:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.468 04:53:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.468 04:53:11 event -- common/autotest_common.sh@10 -- # set +x
00:06:55.468 ************************************
00:06:55.468 START TEST event_scheduler
00:06:55.468 ************************************
00:06:55.468 04:53:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:55.468 * Looking for test storage...
00:06:55.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:55.468 04:53:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:55.468 04:53:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:55.468 04:53:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.468 04:53:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.468 --rc genhtml_branch_coverage=1
00:06:55.468 --rc genhtml_function_coverage=1
00:06:55.468 --rc genhtml_legend=1
00:06:55.468 --rc geninfo_all_blocks=1
00:06:55.468 --rc geninfo_unexecuted_blocks=1
00:06:55.468
00:06:55.468 '
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.468 --rc genhtml_branch_coverage=1
00:06:55.468 --rc genhtml_function_coverage=1
00:06:55.468 --rc genhtml_legend=1
00:06:55.468 --rc geninfo_all_blocks=1
00:06:55.468 --rc geninfo_unexecuted_blocks=1
00:06:55.468
00:06:55.468 '
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.468 --rc genhtml_branch_coverage=1
00:06:55.468 --rc genhtml_function_coverage=1
00:06:55.468 --rc genhtml_legend=1
00:06:55.468 --rc geninfo_all_blocks=1
00:06:55.468 --rc geninfo_unexecuted_blocks=1
00:06:55.468
00:06:55.468 '
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:55.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.468 --rc genhtml_branch_coverage=1
00:06:55.468 --rc genhtml_function_coverage=1
00:06:55.468 --rc genhtml_legend=1
00:06:55.468 --rc geninfo_all_blocks=1
00:06:55.468 --rc geninfo_unexecuted_blocks=1
00:06:55.468
00:06:55.468 '
00:06:55.468 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:55.468 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70609
00:06:55.468 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:55.468 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70609
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70609 ']'
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.468 04:53:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.469 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:55.469 04:53:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.469 04:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:55.469 [2024-11-21 04:53:12.107722] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:06:55.469 [2024-11-21 04:53:12.108016] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70609 ]
00:06:55.730 [2024-11-21 04:53:12.265491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:55.731 [2024-11-21 04:53:12.294274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.731 [2024-11-21 04:53:12.294577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:55.731 [2024-11-21 04:53:12.294894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:55.731 [2024-11-21 04:53:12.294966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:56.304 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:56.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:56.304 POWER: Cannot set governor of lcore 0 to userspace
00:06:56.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:56.304 POWER: Cannot set governor of lcore 0 to performance
00:06:56.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:56.304 POWER: Cannot set governor of lcore 0 to userspace
00:06:56.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:56.304 POWER: Cannot set governor of lcore 0 to userspace
00:06:56.304 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:56.304 POWER: Unable to set Power Management Environment for lcore 0
00:06:56.304 [2024-11-21 04:53:12.953174] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:06:56.304 [2024-11-21 04:53:12.953329] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:06:56.304 [2024-11-21 04:53:12.953404] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:56.304 [2024-11-21 04:53:12.953570] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:56.304 [2024-11-21 04:53:12.953585] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:56.304 [2024-11-21 04:53:12.953622] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.304 04:53:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.304 04:53:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:56.304 [2024-11-21 04:53:13.026470] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
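The sequence above is the standard --wait-for-rpc bring-up: the app starts with initialization parked, the scheduler is chosen over RPC, and only then does framework init run. The POWER/governor failures are the expected fallback in a VM with no writable cpufreq sysfs, which is why the dynamic scheduler proceeds without the dpdk governor. By hand it would look roughly like this (a sketch, same $SPDK_DIR assumption as above):

$SPDK_DIR/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &  # -p picks the main lcore
$SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic   # must precede init
$SPDK_DIR/scripts/rpc.py framework_start_init              # reactors run; the test app reports "started"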
00:06:56.304 04:53:13 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.304 04:53:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:56.304 04:53:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:56.304 04:53:13 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:56.304 04:53:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:56.566 ************************************
00:06:56.566 START TEST scheduler_create_thread
00:06:56.566 ************************************
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:56.566 2
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:56.566 3
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:56.566 4
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:56.566 5
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:56.566 6
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 7 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 8 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 9 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 10 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.566 04:53:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.953 04:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.953 04:53:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:57.953 04:53:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:57.953 04:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.953 04:53:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.337 04:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.337 ************************************ 00:06:59.337 END TEST scheduler_create_thread 00:06:59.337 ************************************ 00:06:59.337 00:06:59.337 real 0m2.613s 00:06:59.337 user 0m0.016s 00:06:59.337 sys 0m0.005s 00:06:59.337 04:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.337 04:53:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.337 04:53:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:59.337 04:53:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70609 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70609 ']' 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70609 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70609 00:06:59.337 killing process with pid 70609 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70609' 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70609 00:06:59.337 04:53:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70609 00:06:59.597 [2024-11-21 04:53:16.137979] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
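The shutdown traced above walks the killprocess helper: confirm the pid still answers kill -0, read its comm name from ps, refuse to signal a sudo wrapper directly, then kill and wait. A stripped-down sketch of that flow (the real helper in autotest_common.sh also handles sudo-wrapped children; treat this as illustrative):

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                        # no pid given
  kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do
  local name; name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1                   # sudo wrapper: branch elided in this sketch
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"                       # reap the child and propagate its exit status
}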
00:06:59.597 ************************************ 00:06:59.597 END TEST event_scheduler 00:06:59.597 ************************************ 00:06:59.597 00:06:59.597 real 0m4.417s 00:06:59.597 user 0m8.091s 00:06:59.597 sys 0m0.339s 00:06:59.597 04:53:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.597 04:53:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.859 04:53:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:59.859 04:53:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:59.859 04:53:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.859 04:53:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.859 04:53:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.859 ************************************ 00:06:59.859 START TEST app_repeat 00:06:59.859 ************************************ 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:59.859 Process app_repeat pid: 70704 00:06:59.859 spdk_app_start Round 0 00:06:59.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70704 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70704' 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70704 /var/tmp/spdk-nbd.sock 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70704 ']' 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.859 04:53:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.859 04:53:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.859 [2024-11-21 04:53:16.414789] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
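app_repeat is launched here with two cores (-m 0x3), four repeat iterations (-t 4) and a private RPC socket; waitforlisten then blocks until that socket answers. A rough stand-in for the wait, using rpc_get_methods as a harmless probe (the real helper also watches the pid):

while ! ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1   # keep polling until the app is listening
done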
00:06:59.859 [2024-11-21 04:53:16.414876] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70704 ] 00:06:59.859 [2024-11-21 04:53:16.560419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.859 [2024-11-21 04:53:16.580720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.859 [2024-11-21 04:53:16.580725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.801 04:53:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.801 04:53:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:00.801 04:53:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.801 Malloc0 00:07:00.801 04:53:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.062 Malloc1 00:07:01.062 04:53:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.062 04:53:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.322 /dev/nbd0 00:07:01.322 04:53:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.322 04:53:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.322 04:53:17 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.322 1+0 records in 00:07:01.322 1+0 records out 00:07:01.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168061 s, 24.4 MB/s 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.322 04:53:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.322 04:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.322 04:53:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.322 04:53:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.581 /dev/nbd1 00:07:01.581 04:53:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.581 04:53:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.581 1+0 records in 00:07:01.581 1+0 records out 00:07:01.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196944 s, 20.8 MB/s 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.581 04:53:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.582 04:53:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.582 04:53:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.582 04:53:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.582 04:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.582 04:53:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.582 04:53:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.582 04:53:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
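Each round builds the same fixture: two 64 MiB malloc bdevs with 4096-byte blocks, each exported as a kernel nbd device and probed with a single direct-I/O read. The same sequence by hand, for one device (sizes and names match the trace):

./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # prints the new name, Malloc0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0   # export it as /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct                  # one-block readability probe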
00:07:01.582 04:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.841 { 00:07:01.841 "nbd_device": "/dev/nbd0", 00:07:01.841 "bdev_name": "Malloc0" 00:07:01.841 }, 00:07:01.841 { 00:07:01.841 "nbd_device": "/dev/nbd1", 00:07:01.841 "bdev_name": "Malloc1" 00:07:01.841 } 00:07:01.841 ]' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.841 { 00:07:01.841 "nbd_device": "/dev/nbd0", 00:07:01.841 "bdev_name": "Malloc0" 00:07:01.841 }, 00:07:01.841 { 00:07:01.841 "nbd_device": "/dev/nbd1", 00:07:01.841 "bdev_name": "Malloc1" 00:07:01.841 } 00:07:01.841 ]' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.841 /dev/nbd1' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.841 /dev/nbd1' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.841 256+0 records in 00:07:01.841 256+0 records out 00:07:01.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682462 s, 154 MB/s 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.841 256+0 records in 00:07:01.841 256+0 records out 00:07:01.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150509 s, 69.7 MB/s 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.841 256+0 records in 00:07:01.841 256+0 records out 00:07:01.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196446 s, 53.4 MB/s 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.841 04:53:18 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.841 04:53:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.100 04:53:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.359 04:53:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.359 04:53:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.624 04:53:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.625 04:53:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.625 04:53:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.625 04:53:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.625 04:53:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.885 04:53:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.885 [2024-11-21 04:53:19.523036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.885 [2024-11-21 04:53:19.543745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.885 [2024-11-21 04:53:19.543749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.885 [2024-11-21 04:53:19.584393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.885 [2024-11-21 04:53:19.584447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.167 04:53:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.167 04:53:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:06.167 spdk_app_start Round 1 00:07:06.167 04:53:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70704 /var/tmp/spdk-nbd.sock 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70704 ']' 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
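Rounds are cycled by asking the running app to deliver SIGTERM to itself over RPC, sleeping three seconds, and waiting for the next listener, which is why the reactors restart above. The manual equivalent of a round boundary is a single call:

./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # graceful per-round shutdown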
00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.167 04:53:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.167 04:53:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.167 Malloc0 00:07:06.167 04:53:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.425 Malloc1 00:07:06.425 04:53:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.425 04:53:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.425 04:53:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.426 04:53:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.685 /dev/nbd0 00:07:06.685 04:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.685 04:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.685 1+0 records in 00:07:06.685 1+0 records out 
00:07:06.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178717 s, 22.9 MB/s 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.685 04:53:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.685 04:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.685 04:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.685 04:53:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.943 /dev/nbd1 00:07:06.943 04:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.943 04:53:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:06.943 04:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.944 1+0 records in 00:07:06.944 1+0 records out 00:07:06.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200174 s, 20.5 MB/s 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.944 04:53:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:06.944 04:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.944 04:53:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.944 04:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.944 04:53:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.944 04:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.203 { 00:07:07.203 "nbd_device": "/dev/nbd0", 00:07:07.203 "bdev_name": "Malloc0" 00:07:07.203 }, 00:07:07.203 { 00:07:07.203 "nbd_device": "/dev/nbd1", 00:07:07.203 "bdev_name": "Malloc1" 00:07:07.203 } 
00:07:07.203 ]' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.203 { 00:07:07.203 "nbd_device": "/dev/nbd0", 00:07:07.203 "bdev_name": "Malloc0" 00:07:07.203 }, 00:07:07.203 { 00:07:07.203 "nbd_device": "/dev/nbd1", 00:07:07.203 "bdev_name": "Malloc1" 00:07:07.203 } 00:07:07.203 ]' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.203 /dev/nbd1' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.203 /dev/nbd1' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.203 256+0 records in 00:07:07.203 256+0 records out 00:07:07.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119729 s, 87.6 MB/s 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.203 256+0 records in 00:07:07.203 256+0 records out 00:07:07.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189423 s, 55.4 MB/s 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.203 256+0 records in 00:07:07.203 256+0 records out 00:07:07.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184469 s, 56.8 MB/s 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.203 04:53:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.203 04:53:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.462 04:53:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.720 04:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.979 04:53:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.979 04:53:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.237 04:53:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.237 [2024-11-21 04:53:24.837184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.237 [2024-11-21 04:53:24.860993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.237 [2024-11-21 04:53:24.861093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.237 [2024-11-21 04:53:24.906280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.237 [2024-11-21 04:53:24.906352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:11.520 spdk_app_start Round 2 00:07:11.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.520 04:53:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.520 04:53:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:11.520 04:53:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70704 /var/tmp/spdk-nbd.sock 00:07:11.520 04:53:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70704 ']' 00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
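The data-path check repeated in every round, condensed into a loop (the harness keeps its scratch file as nbdrandtest under the repo's test/event directory; /tmp is used here purely for illustration):

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256        # 1 MiB of random data
for d in /dev/nbd0 /dev/nbd1; do
  dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct   # write it through the nbd device
  cmp -b -n 1M /tmp/nbdrandtest $d                              # read back and byte-compare
done
rm /tmp/nbdrandtest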
00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.521 04:53:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:11.521 04:53:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.521 Malloc0 00:07:11.521 04:53:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.780 Malloc1 00:07:11.780 04:53:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.780 04:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.039 /dev/nbd0 00:07:12.039 04:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.039 04:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.039 1+0 records in 00:07:12.039 1+0 records out 
00:07:12.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159997 s, 25.6 MB/s 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.039 04:53:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:12.039 04:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.039 04:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.039 04:53:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.298 /dev/nbd1 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.298 1+0 records in 00:07:12.298 1+0 records out 00:07:12.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207161 s, 19.8 MB/s 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.298 04:53:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.298 04:53:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.558 { 00:07:12.558 "nbd_device": "/dev/nbd0", 00:07:12.558 "bdev_name": "Malloc0" 00:07:12.558 }, 00:07:12.558 { 00:07:12.558 "nbd_device": "/dev/nbd1", 00:07:12.558 "bdev_name": "Malloc1" 00:07:12.558 } 
00:07:12.558 ]' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.558 { 00:07:12.558 "nbd_device": "/dev/nbd0", 00:07:12.558 "bdev_name": "Malloc0" 00:07:12.558 }, 00:07:12.558 { 00:07:12.558 "nbd_device": "/dev/nbd1", 00:07:12.558 "bdev_name": "Malloc1" 00:07:12.558 } 00:07:12.558 ]' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.558 /dev/nbd1' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.558 /dev/nbd1' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.558 256+0 records in 00:07:12.558 256+0 records out 00:07:12.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522023 s, 201 MB/s 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.558 256+0 records in 00:07:12.558 256+0 records out 00:07:12.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138499 s, 75.7 MB/s 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.558 256+0 records in 00:07:12.558 256+0 records out 00:07:12.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167948 s, 62.4 MB/s 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.558 04:53:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.558 04:53:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.817 04:53:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.076 04:53:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.076 04:53:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.334 04:53:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.592 [2024-11-21 04:53:30.129093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.592 [2024-11-21 04:53:30.150143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.592 [2024-11-21 04:53:30.150144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.592 [2024-11-21 04:53:30.192303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.592 [2024-11-21 04:53:30.192359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.878 04:53:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70704 /var/tmp/spdk-nbd.sock 00:07:16.878 04:53:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70704 ']' 00:07:16.878 04:53:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.878 04:53:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.878 04:53:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
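The dd/cmp round trip traced above is the entire data-integrity check for the app_repeat NBD path: 1 MiB of /dev/urandom is pushed through each /dev/nbd* device with O_DIRECT, then each device is compared byte-for-byte against the source file. A minimal sketch of that helper, assuming only what the xtrace shows (argument plumbing simplified):

# nbd_dd_data_verify sketch: write random data through every nbd device,
# then compare each device against the temp file and clean up.
nbd_dd_data_verify() {
    local nbd_list=($1)        # space-separated list, e.g. "/dev/nbd0 /dev/nbd1"
    local operation=$2         # "write" or "verify"
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"   # -b prints differing bytes, -n limits to 1 MiB
        done
        rm "$tmp_file"
    fi
}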
00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:16.879 04:53:33 event.app_repeat -- event/event.sh@39 -- # killprocess 70704 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70704 ']' 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70704 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70704 00:07:16.879 killing process with pid 70704 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70704' 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70704 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70704 00:07:16.879 spdk_app_start is called in Round 0. 00:07:16.879 Shutdown signal received, stop current app iteration 00:07:16.879 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 reinitialization... 00:07:16.879 spdk_app_start is called in Round 1. 00:07:16.879 Shutdown signal received, stop current app iteration 00:07:16.879 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 reinitialization... 00:07:16.879 spdk_app_start is called in Round 2. 00:07:16.879 Shutdown signal received, stop current app iteration 00:07:16.879 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 reinitialization... 00:07:16.879 spdk_app_start is called in Round 3. 00:07:16.879 Shutdown signal received, stop current app iteration 00:07:16.879 ************************************ 00:07:16.879 END TEST app_repeat 00:07:16.879 ************************************ 00:07:16.879 04:53:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:16.879 04:53:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:16.879 00:07:16.879 real 0m17.022s 00:07:16.879 user 0m38.000s 00:07:16.879 sys 0m2.177s 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.879 04:53:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 04:53:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:16.879 04:53:33 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:16.879 04:53:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.879 04:53:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.879 04:53:33 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 ************************************ 00:07:16.879 START TEST cpu_locks 00:07:16.879 ************************************ 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:16.879 * Looking for test storage... 
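killprocess, traced above for pid 70704, layers several guards before the actual kill: the pid must be non-empty, still alive (kill -0), and on Linux its command name is checked so a sudo wrapper gets special treatment. A hedged reconstruction; the sudo branch is only hinted at in the trace and is reduced to a bail-out here:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1           # traced as: '[' -z 70704 ']'
    kill -0 "$pid" || return 1          # process must still exist
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # real helper handles sudo children instead
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                          # reaps the child; pid must belong to this shell
}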
00:07:16.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.879 04:53:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.879 --rc genhtml_branch_coverage=1 00:07:16.879 --rc genhtml_function_coverage=1 00:07:16.879 --rc genhtml_legend=1 00:07:16.879 --rc geninfo_all_blocks=1 00:07:16.879 --rc geninfo_unexecuted_blocks=1 00:07:16.879 00:07:16.879 ' 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.879 --rc genhtml_branch_coverage=1 00:07:16.879 --rc genhtml_function_coverage=1 
00:07:16.879 --rc genhtml_legend=1 00:07:16.879 --rc geninfo_all_blocks=1 00:07:16.879 --rc geninfo_unexecuted_blocks=1 00:07:16.879 00:07:16.879 ' 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.879 --rc genhtml_branch_coverage=1 00:07:16.879 --rc genhtml_function_coverage=1 00:07:16.879 --rc genhtml_legend=1 00:07:16.879 --rc geninfo_all_blocks=1 00:07:16.879 --rc geninfo_unexecuted_blocks=1 00:07:16.879 00:07:16.879 ' 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.879 --rc genhtml_branch_coverage=1 00:07:16.879 --rc genhtml_function_coverage=1 00:07:16.879 --rc genhtml_legend=1 00:07:16.879 --rc geninfo_all_blocks=1 00:07:16.879 --rc geninfo_unexecuted_blocks=1 00:07:16.879 00:07:16.879 ' 00:07:16.879 04:53:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:16.879 04:53:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:16.879 04:53:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:16.879 04:53:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.879 04:53:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 ************************************ 00:07:16.879 START TEST default_locks 00:07:16.879 ************************************ 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71124 00:07:16.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71124 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71124 ']' 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.879 04:53:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.140 [2024-11-21 04:53:33.678769] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
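The lcov version gate near the top of cpu_locks (lt 1.15 2, which expands to cmp_versions 1.15 '<' 2) splits version strings on ., - and : and compares them component-wise. A simplified sketch: the real scripts/common.sh helper tracks lt/gt/eq flags before returning, while this version returns directly and assumes components are numeric (decimal falls back to 0 otherwise):

decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}
cmp_versions() {
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a b
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # so: lt 1.15 2 succeeds, as traced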
00:07:17.140 [2024-11-21 04:53:33.678897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:07:17.140 [2024-11-21 04:53:33.832575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.140 [2024-11-21 04:53:33.856838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71124 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71124 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71124 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71124 ']' 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71124 00:07:18.136 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71124 00:07:18.137 killing process with pid 71124 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71124' 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71124 00:07:18.137 04:53:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71124 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71124 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71124 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:18.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71124 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71124 ']' 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71124) - No such process 00:07:18.399 ERROR: process (pid: 71124) is no longer running 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.399 ************************************ 00:07:18.399 END TEST default_locks 00:07:18.399 ************************************ 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:18.399 00:07:18.399 real 0m1.426s 00:07:18.399 user 0m1.411s 00:07:18.399 sys 0m0.451s 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.399 04:53:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.399 04:53:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:18.399 04:53:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.399 04:53:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.399 04:53:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.399 ************************************ 00:07:18.399 START TEST default_locks_via_rpc 00:07:18.399 ************************************ 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71177 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71177 00:07:18.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71177 ']' 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.399 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.660 [2024-11-21 04:53:35.152414] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:18.660 [2024-11-21 04:53:35.152545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71177 ] 00:07:18.660 [2024-11-21 04:53:35.308503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.660 [2024-11-21 04:53:35.333852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.596 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.597 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.597 04:53:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.597 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71177 00:07:19.597 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.597 04:53:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71177 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71177 
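The lock probe used throughout these tests is a one-liner: a target holds its CPU core lock exactly when lslocks reports a file lock for that pid whose path contains spdk_cpu_lock, as traced for pid 71177 above.

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
# usage: locks_exist 71177 && echo "pid 71177 holds its core lock"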
00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71177 ']' 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71177 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71177 00:07:19.597 killing process with pid 71177 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71177' 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71177 00:07:19.597 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71177 00:07:19.858 00:07:19.858 real 0m1.494s 00:07:19.858 user 0m1.492s 00:07:19.858 sys 0m0.466s 00:07:19.858 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.858 ************************************ 00:07:19.858 END TEST default_locks_via_rpc 00:07:19.858 ************************************ 00:07:19.858 04:53:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.119 04:53:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:20.119 04:53:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.119 04:53:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.119 04:53:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.119 ************************************ 00:07:20.119 START TEST non_locking_app_on_locked_coremask 00:07:20.119 ************************************ 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71218 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71218 /var/tmp/spdk.sock 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71218 ']' 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.119 04:53:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.119 [2024-11-21 04:53:36.696619] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:20.119 [2024-11-21 04:53:36.696902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71218 ] 00:07:20.379 [2024-11-21 04:53:36.855919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.379 [2024-11-21 04:53:36.881244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71234 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71234 /var/tmp/spdk2.sock 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71234 ']' 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.950 04:53:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.950 [2024-11-21 04:53:37.595835] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:20.950 [2024-11-21 04:53:37.596161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71234 ] 00:07:21.211 [2024-11-21 04:53:37.769818] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
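The launch pattern traced above for the non_locking case: the first spdk_tgt claims the core-0 lock file, and a second instance can share the same core only because it starts with --disable-cpumask-locks and its own RPC socket. A sketch using the paths and masks from the log; the backgrounding and pid capture are assumptions, and waitforlisten is the autotest helper traced throughout this section:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"              # first target claims /var/tmp/spdk_cpu_lock_000

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock   # shares core 0 without locking it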
00:07:21.211 [2024-11-21 04:53:37.769866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.211 [2024-11-21 04:53:37.823204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.864 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.864 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.864 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71218 00:07:21.864 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71218 00:07:21.864 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71218 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71218 ']' 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71218 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71218 00:07:22.124 killing process with pid 71218 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71218' 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71218 00:07:22.124 04:53:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71218 00:07:22.688 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71234 00:07:22.688 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71234 ']' 00:07:22.688 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71234 00:07:22.688 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.688 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.689 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71234 00:07:22.689 killing process with pid 71234 00:07:22.689 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.689 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.689 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71234' 00:07:22.689 04:53:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71234 00:07:22.689 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71234 00:07:22.949 ************************************ 00:07:22.949 END TEST non_locking_app_on_locked_coremask 00:07:22.949 ************************************ 00:07:22.949 00:07:22.949 real 0m3.047s 00:07:22.949 user 0m3.248s 00:07:22.949 sys 0m0.859s 00:07:22.949 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.949 04:53:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.210 04:53:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:23.211 04:53:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.211 04:53:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.211 04:53:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.211 ************************************ 00:07:23.211 START TEST locking_app_on_unlocked_coremask 00:07:23.211 ************************************ 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71292 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71292 /var/tmp/spdk.sock 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71292 ']' 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.211 04:53:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.211 [2024-11-21 04:53:39.805961] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:23.211 [2024-11-21 04:53:39.806094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71292 ] 00:07:23.471 [2024-11-21 04:53:39.959186] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:23.471 [2024-11-21 04:53:39.959238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.471 [2024-11-21 04:53:39.995990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71308 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71308 /var/tmp/spdk2.sock 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71308 ']' 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.044 04:53:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.044 [2024-11-21 04:53:40.749536] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
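waitforlisten itself is only partially visible in these traces (the max_retries=100 default, the rpc_addr argument, and the echo). A hedged sketch of the polling loop; the socket-file test used as the readiness probe here is an assumption, not necessarily the helper's actual RPC ping:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # default socket, as in the traces
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" || return 1            # target died during startup
        [ -S "$rpc_addr" ] && return 0        # assumed probe: the UNIX socket exists
        sleep 0.1
    done
    return 1
}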
00:07:24.044 [2024-11-21 04:53:40.749719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71308 ] 00:07:24.305 [2024-11-21 04:53:40.933928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.305 [2024-11-21 04:53:41.014859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71308 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71308 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71292 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71292 ']' 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71292 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71292 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71292' 00:07:25.245 killing process with pid 71292 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71292 00:07:25.245 04:53:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71292 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71308 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71308 ']' 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71308 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71308 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.180 killing process with pid 71308 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71308' 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71308 00:07:26.180 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71308 00:07:26.440 00:07:26.440 real 0m3.252s 00:07:26.440 user 0m3.280s 00:07:26.440 sys 0m1.132s 00:07:26.440 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.440 ************************************ 00:07:26.440 END TEST locking_app_on_unlocked_coremask 00:07:26.440 ************************************ 00:07:26.440 04:53:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 04:53:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:26.440 04:53:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.440 04:53:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.440 04:53:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.440 ************************************ 00:07:26.440 START TEST locking_app_on_locked_coremask 00:07:26.440 ************************************ 00:07:26.440 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:26.440 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71377 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71377 /var/tmp/spdk.sock 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71377 ']' 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.441 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.441 [2024-11-21 04:53:43.089423] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:26.441 [2024-11-21 04:53:43.089529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71377 ] 00:07:26.700 [2024-11-21 04:53:43.238224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.701 [2024-11-21 04:53:43.263121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71382 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71382 /var/tmp/spdk2.sock 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71382 /var/tmp/spdk2.sock 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:27.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71382 /var/tmp/spdk2.sock 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71382 ']' 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.267 04:53:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.525 [2024-11-21 04:53:44.018184] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:07:27.525 [2024-11-21 04:53:44.018300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71382 ] 00:07:27.525 [2024-11-21 04:53:44.179409] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71377 has claimed it. 00:07:27.525 [2024-11-21 04:53:44.179465] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:28.092 ERROR: process (pid: 71382) is no longer running 00:07:28.092 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71382) - No such process 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71377 ']' 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.092 killing process with pid 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71377' 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71377 00:07:28.092 04:53:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71377 00:07:28.351 00:07:28.351 real 0m2.043s 00:07:28.351 user 0m2.277s 00:07:28.351 sys 0m0.490s 00:07:28.351 04:53:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.351 04:53:45 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:28.351 ************************************ 00:07:28.351 END TEST locking_app_on_locked_coremask 00:07:28.351 ************************************ 00:07:28.609 04:53:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:28.609 04:53:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.610 04:53:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.610 04:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.610 ************************************ 00:07:28.610 START TEST locking_overlapped_coremask 00:07:28.610 ************************************ 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71435 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71435 /var/tmp/spdk.sock 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71435 ']' 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.610 04:53:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.610 [2024-11-21 04:53:45.173278] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
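The NOT wrapper, traced in the locking_app_on_locked_coremask test above, runs a command and inverts its exit status so that an expected failure (a second target that cannot claim an already-locked core) makes the test pass. A reconstruction from the traced lines; valid_exec_arg and the signal branch are summarized:

NOT() {
    local es=0
    "$@" || es=$?              # run the wrapped command, capture its exit code
    if (( es > 128 )); then
        es=1                   # assumption: fold death-by-signal codes into plain failure
    fi
    (( !es == 0 ))             # arithmetic inversion, exactly as traced
}
# NOT waitforlisten 71382 /var/tmp/spdk2.sock   # succeeds because the listen never happens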
00:07:28.610 [2024-11-21 04:53:45.173390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71435 ] 00:07:28.610 [2024-11-21 04:53:45.322750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.868 [2024-11-21 04:53:45.347408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.868 [2024-11-21 04:53:45.347513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.868 [2024-11-21 04:53:45.348104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71442 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71442 /var/tmp/spdk2.sock 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71442 /var/tmp/spdk2.sock 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71442 /var/tmp/spdk2.sock 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71442 ']' 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.435 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.435 [2024-11-21 04:53:46.106881] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
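The overlap driving this test is plain mask arithmetic: the primary holds cores 0-2 (-m 0x7) while the second target asks for cores 2-4 (-m 0x1c), so core 2 is contested and the claim traced in the following lines fails:

printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4: core 2 is requested by both targets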
00:07:29.435 [2024-11-21 04:53:46.107004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71442 ] 00:07:29.694 [2024-11-21 04:53:46.278790] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71435 has claimed it. 00:07:29.694 [2024-11-21 04:53:46.278856] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:30.261 ERROR: process (pid: 71442) is no longer running 00:07:30.261 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71442) - No such process 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71435 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71435 ']' 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71435 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71435 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.261 killing process with pid 71435 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71435' 00:07:30.261 04:53:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71435 00:07:30.261 04:53:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71435 00:07:30.519 00:07:30.519 real 0m1.926s 00:07:30.519 user 0m5.377s 00:07:30.519 sys 0m0.396s 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.519 ************************************ 00:07:30.519 END TEST locking_overlapped_coremask 00:07:30.519 ************************************ 00:07:30.519 04:53:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:30.519 04:53:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.519 04:53:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.519 04:53:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.519 ************************************ 00:07:30.519 START TEST locking_overlapped_coremask_via_rpc 00:07:30.519 ************************************ 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71490 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71490 /var/tmp/spdk.sock 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71490 ']' 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.519 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.519 [2024-11-21 04:53:47.149779] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:30.519 [2024-11-21 04:53:47.149908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71490 ] 00:07:30.777 [2024-11-21 04:53:47.301735] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
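The check_remaining_locks step above asserts that a target started with -m 0x7 left exactly one lock file per claimed core: /var/tmp/spdk_cpu_lock_000 through _002 for cores 0-2. The same comparison as a standalone sketch, assuming the lock-file naming visible in the trace:

  check_remaining_locks_sketch() {
      local locks=(/var/tmp/spdk_cpu_lock_*)             # lock files actually present
      local expected=(/var/tmp/spdk_cpu_lock_{000..002}) # what a 0x7 mask (cores 0-2) should leave
      [[ "${locks[*]}" == "${expected[*]}" ]]            # fail if any core lock is missing or extra
  }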
00:07:30.777 [2024-11-21 04:53:47.301784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.777 [2024-11-21 04:53:47.328117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.777 [2024-11-21 04:53:47.328229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.777 [2024-11-21 04:53:47.328310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71502 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71502 /var/tmp/spdk2.sock 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71502 ']' 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.347 04:53:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.347 [2024-11-21 04:53:48.067364] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:31.347 [2024-11-21 04:53:48.067546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71502 ] 00:07:31.608 [2024-11-21 04:53:48.249984] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.608 [2024-11-21 04:53:48.250062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.869 [2024-11-21 04:53:48.338378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.869 [2024-11-21 04:53:48.338501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.869 [2024-11-21 04:53:48.338591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.436 [2024-11-21 04:53:48.952773] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71490 has claimed it. 
00:07:32.436 request: 00:07:32.436 { 00:07:32.436 "method": "framework_enable_cpumask_locks", 00:07:32.436 "req_id": 1 00:07:32.436 } 00:07:32.436 Got JSON-RPC error response 00:07:32.436 response: 00:07:32.436 { 00:07:32.436 "code": -32603, 00:07:32.436 "message": "Failed to claim CPU core: 2" 00:07:32.436 } 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71490 /var/tmp/spdk.sock 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71490 ']' 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.436 04:53:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71502 /var/tmp/spdk2.sock 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71502 ']' 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
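Both targets in this variant start successfully because --disable-cpumask-locks defers core locking; the locks are only taken once framework_enable_cpumask_locks is called over RPC. The first enable (default socket) succeeds, while the enable sent to the second target collides on shared core 2 and returns the -32603 response shown above. The same pair of calls, using the method name and -s flag from the trace:

  scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: -32603, core 2 already locked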
00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.693 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.694 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:32.694 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:32.694 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:32.694 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:32.694 00:07:32.694 real 0m2.309s 00:07:32.694 user 0m1.089s 00:07:32.694 sys 0m0.150s 00:07:32.694 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.694 04:53:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.694 ************************************ 00:07:32.694 END TEST locking_overlapped_coremask_via_rpc 00:07:32.694 ************************************ 00:07:32.694 04:53:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:32.694 04:53:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71490 ]] 00:07:32.694 04:53:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71490 00:07:32.694 04:53:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71490 ']' 00:07:32.694 04:53:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71490 00:07:32.694 04:53:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:32.694 04:53:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.694 04:53:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71490 00:07:32.952 04:53:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.952 04:53:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.952 04:53:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71490' 00:07:32.952 killing process with pid 71490 00:07:32.952 04:53:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71490 00:07:32.952 04:53:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71490 00:07:33.211 04:53:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71502 ]] 00:07:33.211 04:53:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71502 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71502 ']' 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71502 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.211 
04:53:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71502 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71502' 00:07:33.211 killing process with pid 71502 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71502 00:07:33.211 04:53:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71502 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71490 ]] 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71490 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71490 ']' 00:07:33.470 Process with pid 71490 is not found 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71490 00:07:33.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71490) - No such process 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71490 is not found' 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71502 ]] 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71502 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71502 ']' 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71502 00:07:33.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71502) - No such process 00:07:33.470 Process with pid 71502 is not found 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71502 is not found' 00:07:33.470 04:53:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:33.470 00:07:33.470 real 0m16.619s 00:07:33.470 user 0m28.655s 00:07:33.470 sys 0m4.871s 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.470 ************************************ 00:07:33.470 END TEST cpu_locks 00:07:33.470 04:53:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.470 ************************************ 00:07:33.470 00:07:33.470 real 0m42.276s 00:07:33.470 user 1m21.147s 00:07:33.470 sys 0m7.825s 00:07:33.470 04:53:50 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.470 ************************************ 00:07:33.470 END TEST event 00:07:33.470 ************************************ 00:07:33.471 04:53:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.471 04:53:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:33.471 04:53:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.471 04:53:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.471 04:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:33.471 ************************************ 00:07:33.471 START TEST thread 00:07:33.471 ************************************ 00:07:33.471 04:53:50 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:33.729 * Looking for test storage... 
00:07:33.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:33.729 04:53:50 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.729 04:53:50 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.729 04:53:50 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.729 04:53:50 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.729 04:53:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.729 04:53:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.729 04:53:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.730 04:53:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.730 04:53:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.730 04:53:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.730 04:53:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.730 04:53:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.730 04:53:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.730 04:53:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.730 04:53:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.730 04:53:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:33.730 04:53:50 thread -- scripts/common.sh@345 -- # : 1 00:07:33.730 04:53:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.730 04:53:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.730 04:53:50 thread -- scripts/common.sh@365 -- # decimal 1 00:07:33.730 04:53:50 thread -- scripts/common.sh@353 -- # local d=1 00:07:33.730 04:53:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.730 04:53:50 thread -- scripts/common.sh@355 -- # echo 1 00:07:33.730 04:53:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.730 04:53:50 thread -- scripts/common.sh@366 -- # decimal 2 00:07:33.730 04:53:50 thread -- scripts/common.sh@353 -- # local d=2 00:07:33.730 04:53:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.730 04:53:50 thread -- scripts/common.sh@355 -- # echo 2 00:07:33.730 04:53:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.730 04:53:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.730 04:53:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.730 04:53:50 thread -- scripts/common.sh@368 -- # return 0 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.730 --rc genhtml_branch_coverage=1 00:07:33.730 --rc genhtml_function_coverage=1 00:07:33.730 --rc genhtml_legend=1 00:07:33.730 --rc geninfo_all_blocks=1 00:07:33.730 --rc geninfo_unexecuted_blocks=1 00:07:33.730 00:07:33.730 ' 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.730 --rc genhtml_branch_coverage=1 00:07:33.730 --rc genhtml_function_coverage=1 00:07:33.730 --rc genhtml_legend=1 00:07:33.730 --rc geninfo_all_blocks=1 00:07:33.730 --rc geninfo_unexecuted_blocks=1 00:07:33.730 00:07:33.730 ' 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:33.730 --rc genhtml_branch_coverage=1 00:07:33.730 --rc genhtml_function_coverage=1 00:07:33.730 --rc genhtml_legend=1 00:07:33.730 --rc geninfo_all_blocks=1 00:07:33.730 --rc geninfo_unexecuted_blocks=1 00:07:33.730 00:07:33.730 ' 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.730 --rc genhtml_branch_coverage=1 00:07:33.730 --rc genhtml_function_coverage=1 00:07:33.730 --rc genhtml_legend=1 00:07:33.730 --rc geninfo_all_blocks=1 00:07:33.730 --rc geninfo_unexecuted_blocks=1 00:07:33.730 00:07:33.730 ' 00:07:33.730 04:53:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.730 04:53:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.730 ************************************ 00:07:33.730 START TEST thread_poller_perf 00:07:33.730 ************************************ 00:07:33.730 04:53:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:33.730 [2024-11-21 04:53:50.308769] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:33.730 [2024-11-21 04:53:50.308869] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71640 ] 00:07:33.730 [2024-11-21 04:53:50.458960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.988 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:33.988 [2024-11-21 04:53:50.481855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.999 [2024-11-21T04:53:51.733Z] ====================================== 00:07:34.999 [2024-11-21T04:53:51.733Z] busy:2612175894 (cyc) 00:07:34.999 [2024-11-21T04:53:51.733Z] total_run_count: 411000 00:07:34.999 [2024-11-21T04:53:51.733Z] tsc_hz: 2600000000 (cyc) 00:07:34.999 [2024-11-21T04:53:51.733Z] ====================================== 00:07:34.999 [2024-11-21T04:53:51.733Z] poller_cost: 6355 (cyc), 2444 (nsec) 00:07:34.999 00:07:34.999 real 0m1.247s 00:07:34.999 user 0m1.086s 00:07:34.999 sys 0m0.057s 00:07:34.999 04:53:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.999 04:53:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.999 ************************************ 00:07:34.999 END TEST thread_poller_perf 00:07:34.999 ************************************ 00:07:34.999 04:53:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.999 04:53:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:34.999 04:53:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.999 04:53:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.999 ************************************ 00:07:34.999 START TEST thread_poller_perf 00:07:34.999 ************************************ 00:07:34.999 04:53:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:34.999 [2024-11-21 04:53:51.613783] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:34.999 [2024-11-21 04:53:51.613891] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71671 ] 00:07:35.259 [2024-11-21 04:53:51.766294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.259 [2024-11-21 04:53:51.791724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.259 Running 1000 pollers for 1 seconds with 0 microseconds period. 
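The poller_perf summary above reduces to two divisions: per-poll cost in cycles is the busy cycle count over total_run_count, and the nanosecond figure rescales that by the TSC frequency. Reproducing the first run's numbers (2612175894 busy cycles, 411000 runs, 2.6 GHz TSC):

  awk 'BEGIN { busy = 2612175894; runs = 411000; hz = 2600000000
               cyc = busy / runs                                      # ~6355 cycles per poll
               printf "%d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }' # prints: 6355 (cyc), 2444 (nsec)

The second run, whose results follow, uses -l 0 (a 0-microsecond period, i.e. pollers without a timer period), which is why its per-poll cost reported below drops to 485 cycles.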
00:07:36.194 [2024-11-21T04:53:52.928Z] ====================================== 00:07:36.194 [2024-11-21T04:53:52.928Z] busy:2602896470 (cyc) 00:07:36.194 [2024-11-21T04:53:52.928Z] total_run_count: 5357000 00:07:36.194 [2024-11-21T04:53:52.928Z] tsc_hz: 2600000000 (cyc) 00:07:36.194 [2024-11-21T04:53:52.928Z] ====================================== 00:07:36.194 [2024-11-21T04:53:52.928Z] poller_cost: 485 (cyc), 186 (nsec) 00:07:36.194 00:07:36.194 real 0m1.250s 00:07:36.194 user 0m1.081s 00:07:36.194 sys 0m0.064s 00:07:36.194 04:53:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.194 04:53:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.194 ************************************ 00:07:36.194 END TEST thread_poller_perf 00:07:36.194 ************************************ 00:07:36.194 04:53:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:36.194 00:07:36.194 real 0m2.732s 00:07:36.194 user 0m2.264s 00:07:36.194 sys 0m0.248s 00:07:36.194 04:53:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.194 04:53:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.194 ************************************ 00:07:36.194 END TEST thread 00:07:36.194 ************************************ 00:07:36.194 04:53:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:36.194 04:53:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:36.194 04:53:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.194 04:53:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.194 04:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:36.454 ************************************ 00:07:36.454 START TEST app_cmdline 00:07:36.454 ************************************ 00:07:36.454 04:53:52 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:36.454 * Looking for test storage... 
00:07:36.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.454 04:53:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.454 --rc genhtml_branch_coverage=1 00:07:36.454 --rc genhtml_function_coverage=1 00:07:36.454 --rc genhtml_legend=1 00:07:36.454 --rc geninfo_all_blocks=1 00:07:36.454 --rc geninfo_unexecuted_blocks=1 00:07:36.454 00:07:36.454 ' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.454 --rc genhtml_branch_coverage=1 00:07:36.454 --rc genhtml_function_coverage=1 00:07:36.454 --rc genhtml_legend=1 00:07:36.454 --rc geninfo_all_blocks=1 00:07:36.454 --rc geninfo_unexecuted_blocks=1 00:07:36.454 
00:07:36.454 ' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.454 --rc genhtml_branch_coverage=1 00:07:36.454 --rc genhtml_function_coverage=1 00:07:36.454 --rc genhtml_legend=1 00:07:36.454 --rc geninfo_all_blocks=1 00:07:36.454 --rc geninfo_unexecuted_blocks=1 00:07:36.454 00:07:36.454 ' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.454 --rc genhtml_branch_coverage=1 00:07:36.454 --rc genhtml_function_coverage=1 00:07:36.454 --rc genhtml_legend=1 00:07:36.454 --rc geninfo_all_blocks=1 00:07:36.454 --rc geninfo_unexecuted_blocks=1 00:07:36.454 00:07:36.454 ' 00:07:36.454 04:53:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.454 04:53:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71749 00:07:36.454 04:53:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71749 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71749 ']' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.454 04:53:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.454 04:53:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.454 [2024-11-21 04:53:53.154942] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
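This spdk_tgt instance is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and everything else is rejected with JSON-RPC -32601 (Method not found), as the env_dpdk_get_mem_stats probe further down demonstrates. The contrast, using the same rpc.py entry points as the trace:

  scripts/rpc.py spdk_get_version        # allowed: returns the version object printed below
  scripts/rpc.py rpc_get_methods         # allowed: lists the permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats  # blocked: -32601 "Method not found"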
00:07:36.454 [2024-11-21 04:53:53.155098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71749 ] 00:07:36.715 [2024-11-21 04:53:53.317141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.715 [2024-11-21 04:53:53.358265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.288 04:53:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.288 04:53:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:37.288 04:53:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:37.550 { 00:07:37.550 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:07:37.550 "fields": { 00:07:37.550 "major": 25, 00:07:37.550 "minor": 1, 00:07:37.550 "patch": 0, 00:07:37.550 "suffix": "-pre", 00:07:37.550 "commit": "557f022f6" 00:07:37.550 } 00:07:37.550 } 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:37.550 04:53:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:37.550 04:53:54 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.811 request: 00:07:37.811 { 00:07:37.811 "method": "env_dpdk_get_mem_stats", 00:07:37.811 "req_id": 1 00:07:37.811 } 00:07:37.811 Got JSON-RPC error response 00:07:37.811 response: 00:07:37.811 { 00:07:37.811 "code": -32601, 00:07:37.811 "message": "Method not found" 00:07:37.811 } 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.811 04:53:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71749 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71749 ']' 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71749 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71749 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71749' 00:07:37.811 killing process with pid 71749 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@973 -- # kill 71749 00:07:37.811 04:53:54 app_cmdline -- common/autotest_common.sh@978 -- # wait 71749 00:07:38.383 00:07:38.383 real 0m1.998s 00:07:38.383 user 0m2.170s 00:07:38.383 sys 0m0.569s 00:07:38.383 ************************************ 00:07:38.383 END TEST app_cmdline 00:07:38.383 ************************************ 00:07:38.383 04:53:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.383 04:53:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 04:53:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.383 04:53:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.383 04:53:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.383 04:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 ************************************ 00:07:38.383 START TEST version 00:07:38.383 ************************************ 00:07:38.383 04:53:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.383 * Looking for test storage... 
00:07:38.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.384 04:53:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.384 04:53:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.384 04:53:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.384 04:53:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.384 04:53:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.384 04:53:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.384 04:53:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.384 04:53:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.384 04:53:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.384 04:53:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.384 04:53:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.384 04:53:55 version -- scripts/common.sh@344 -- # case "$op" in 00:07:38.384 04:53:55 version -- scripts/common.sh@345 -- # : 1 00:07:38.384 04:53:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.384 04:53:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.384 04:53:55 version -- scripts/common.sh@365 -- # decimal 1 00:07:38.384 04:53:55 version -- scripts/common.sh@353 -- # local d=1 00:07:38.384 04:53:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.384 04:53:55 version -- scripts/common.sh@355 -- # echo 1 00:07:38.384 04:53:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.384 04:53:55 version -- scripts/common.sh@366 -- # decimal 2 00:07:38.384 04:53:55 version -- scripts/common.sh@353 -- # local d=2 00:07:38.384 04:53:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.384 04:53:55 version -- scripts/common.sh@355 -- # echo 2 00:07:38.384 04:53:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.384 04:53:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.384 04:53:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.384 04:53:55 version -- scripts/common.sh@368 -- # return 0 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.384 --rc genhtml_branch_coverage=1 00:07:38.384 --rc genhtml_function_coverage=1 00:07:38.384 --rc genhtml_legend=1 00:07:38.384 --rc geninfo_all_blocks=1 00:07:38.384 --rc geninfo_unexecuted_blocks=1 00:07:38.384 00:07:38.384 ' 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.384 --rc genhtml_branch_coverage=1 00:07:38.384 --rc genhtml_function_coverage=1 00:07:38.384 --rc genhtml_legend=1 00:07:38.384 --rc geninfo_all_blocks=1 00:07:38.384 --rc geninfo_unexecuted_blocks=1 00:07:38.384 00:07:38.384 ' 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.384 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:38.384 --rc genhtml_branch_coverage=1 00:07:38.384 --rc genhtml_function_coverage=1 00:07:38.384 --rc genhtml_legend=1 00:07:38.384 --rc geninfo_all_blocks=1 00:07:38.384 --rc geninfo_unexecuted_blocks=1 00:07:38.384 00:07:38.384 ' 00:07:38.384 04:53:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.384 --rc genhtml_branch_coverage=1 00:07:38.384 --rc genhtml_function_coverage=1 00:07:38.384 --rc genhtml_legend=1 00:07:38.384 --rc geninfo_all_blocks=1 00:07:38.384 --rc geninfo_unexecuted_blocks=1 00:07:38.384 00:07:38.384 ' 00:07:38.384 04:53:55 version -- app/version.sh@17 -- # get_header_version major 00:07:38.384 04:53:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.384 04:53:55 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.384 04:53:55 version -- app/version.sh@14 -- # cut -f2 00:07:38.384 04:53:55 version -- app/version.sh@17 -- # major=25 00:07:38.384 04:53:55 version -- app/version.sh@18 -- # get_header_version minor 00:07:38.384 04:53:55 version -- app/version.sh@14 -- # cut -f2 00:07:38.384 04:53:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.384 04:53:55 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.384 04:53:55 version -- app/version.sh@18 -- # minor=1 00:07:38.646 04:53:55 version -- app/version.sh@19 -- # get_header_version patch 00:07:38.646 04:53:55 version -- app/version.sh@14 -- # cut -f2 00:07:38.646 04:53:55 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.646 04:53:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.646 04:53:55 version -- app/version.sh@19 -- # patch=0 00:07:38.646 04:53:55 version -- app/version.sh@20 -- # get_header_version suffix 00:07:38.646 04:53:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:38.646 04:53:55 version -- app/version.sh@14 -- # cut -f2 00:07:38.646 04:53:55 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.646 04:53:55 version -- app/version.sh@20 -- # suffix=-pre 00:07:38.646 04:53:55 version -- app/version.sh@22 -- # version=25.1 00:07:38.646 04:53:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.646 04:53:55 version -- app/version.sh@28 -- # version=25.1rc0 00:07:38.646 04:53:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:38.646 04:53:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.646 04:53:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:38.646 04:53:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:38.646 00:07:38.646 real 0m0.184s 00:07:38.646 user 0m0.108s 00:07:38.646 sys 0m0.102s 00:07:38.646 04:53:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.646 04:53:55 version -- common/autotest_common.sh@10 -- # set +x 00:07:38.646 ************************************ 00:07:38.646 END TEST version 00:07:38.646 ************************************ 00:07:38.646 04:53:55 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:38.647 04:53:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:38.647 04:53:55 -- spdk/autotest.sh@194 -- # uname -s 00:07:38.647 04:53:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:38.647 04:53:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:38.647 04:53:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:38.647 04:53:55 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:38.647 04:53:55 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:38.647 04:53:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.647 04:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.647 04:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.647 ************************************ 00:07:38.647 START TEST blockdev_nvme 00:07:38.647 ************************************ 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:38.647 * Looking for test storage... 00:07:38.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.647 04:53:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.647 --rc genhtml_branch_coverage=1 00:07:38.647 --rc genhtml_function_coverage=1 00:07:38.647 --rc genhtml_legend=1 00:07:38.647 --rc geninfo_all_blocks=1 00:07:38.647 --rc geninfo_unexecuted_blocks=1 00:07:38.647 00:07:38.647 ' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.647 --rc genhtml_branch_coverage=1 00:07:38.647 --rc genhtml_function_coverage=1 00:07:38.647 --rc genhtml_legend=1 00:07:38.647 --rc geninfo_all_blocks=1 00:07:38.647 --rc geninfo_unexecuted_blocks=1 00:07:38.647 00:07:38.647 ' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.647 --rc genhtml_branch_coverage=1 00:07:38.647 --rc genhtml_function_coverage=1 00:07:38.647 --rc genhtml_legend=1 00:07:38.647 --rc geninfo_all_blocks=1 00:07:38.647 --rc geninfo_unexecuted_blocks=1 00:07:38.647 00:07:38.647 ' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.647 --rc genhtml_branch_coverage=1 00:07:38.647 --rc genhtml_function_coverage=1 00:07:38.647 --rc genhtml_legend=1 00:07:38.647 --rc geninfo_all_blocks=1 00:07:38.647 --rc geninfo_unexecuted_blocks=1 00:07:38.647 00:07:38.647 ' 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:38.647 04:53:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71916 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71916 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71916 ']' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.647 04:53:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.647 04:53:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.907 [2024-11-21 04:53:55.453732] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
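The dense xtrace block earlier in this run is scripts/common.sh deciding whether the installed lcov is new enough for the coverage flags: lt() defers to cmp_versions, which splits each version string on '.', '-' and ':' into arrays (read -ra under IFS=.-:) and walks the components left to right. A minimal standalone sketch of that comparison, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Return 0 (true) when $1 < $2, mirroring the "lt 1.15 2" call in the trace.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0}   # missing components count as 0
            b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"

Since 1 < 2 already on the first component, the trace takes the true branch and exports the LCOV_OPTS/LCOV coverage options seen above before spdk_tgt is launched.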
00:07:38.907 [2024-11-21 04:53:55.453901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71916 ] 00:07:38.907 [2024-11-21 04:53:55.612853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.166 [2024-11-21 04:53:55.649013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.733 04:53:56 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.733 04:53:56 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:39.733 04:53:56 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:39.733 04:53:56 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:39.733 04:53:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:39.733 04:53:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:39.733 04:53:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.733 04:53:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:39.733 04:53:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.733 04:53:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.990 04:53:56 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:39.990 04:53:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.990 04:53:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:39.991 04:53:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6bbeb732-9f78-4111-bb4c-462c2e794bf9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6bbeb732-9f78-4111-bb4c-462c2e794bf9",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "08effe47-a830-4361-b785-b5209c7cc5cc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "08effe47-a830-4361-b785-b5209c7cc5cc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6755e2f2-0bad-4649-9698-564fc91d19ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6755e2f2-0bad-4649-9698-564fc91d19ea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "13162eff-98e6-4930-b2f0-015c9d4437fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "13162eff-98e6-4930-b2f0-015c9d4437fa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a3491d08-9d6e-4602-98f3-21bb33351053"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a3491d08-9d6e-4602-98f3-21bb33351053",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "858e7bf2-927e-4561-b7f1-24a6dc452b77"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "858e7bf2-927e-4561-b7f1-24a6dc452b77",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:39.991 04:53:56 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:40.249 04:53:56 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.249 04:53:56 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:40.249 04:53:56 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:40.249 04:53:56 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71916 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71916 ']' 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71916 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:40.249 04:53:56 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71916 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.249 killing process with pid 71916 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71916' 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71916 00:07:40.249 04:53:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71916 00:07:40.510 04:53:57 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:40.510 04:53:57 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.510 04:53:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:40.510 04:53:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.510 04:53:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.510 ************************************ 00:07:40.510 START TEST bdev_hello_world 00:07:40.510 ************************************ 00:07:40.510 04:53:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.510 [2024-11-21 04:53:57.149558] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:40.510 [2024-11-21 04:53:57.149699] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71983 ] 00:07:40.769 [2024-11-21 04:53:57.303097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.769 [2024-11-21 04:53:57.329707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.027 [2024-11-21 04:53:57.704709] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.027 [2024-11-21 04:53:57.704745] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:41.027 [2024-11-21 04:53:57.704760] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.027 [2024-11-21 04:53:57.706477] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.027 [2024-11-21 04:53:57.706883] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.027 [2024-11-21 04:53:57.706905] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.028 [2024-11-21 04:53:57.707017] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
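That sequence is the whole hello-world round trip: hello_bdev opens Nvme0n1 from the shared JSON config, writes a buffer through an I/O channel, reads it back, and logs the recovered string ("Read string from bdev : Hello World!"). Stripped of the harness wrappers, the invocation is just the following (binary path, --json config, and -b flag exactly as they appear in the trace; the trailing status check is added here for illustration):

    # Exit status is 0 only if open, write, and readback all succeed.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1
    echo "hello_bdev rc=$?"

run_test wraps this call in xtrace bookkeeping and records the real/user/sys times reported just below.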
00:07:41.028 00:07:41.028 [2024-11-21 04:53:57.707034] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:41.286 00:07:41.286 real 0m0.783s 00:07:41.286 user 0m0.506s 00:07:41.286 sys 0m0.175s 00:07:41.286 04:53:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.286 04:53:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.286 ************************************ 00:07:41.286 END TEST bdev_hello_world 00:07:41.286 ************************************ 00:07:41.286 04:53:57 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.286 04:53:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.286 04:53:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.286 04:53:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.286 ************************************ 00:07:41.286 START TEST bdev_bounds 00:07:41.286 ************************************ 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72014 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72014' 00:07:41.286 Process bdevio pid: 72014 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72014 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72014 ']' 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.286 04:53:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.287 [2024-11-21 04:53:57.968710] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
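The bdevio launch just traced uses a two-process pattern: bdevio starts in wait mode, registers every bdev from the JSON config as a CUnit suite, and tests.py then triggers the per-bdev I/O boundary checks over RPC, producing the suite-by-suite results that follow. Both commands are taken from the trace; the backgrounding and wait here are a simplification of the harness's waitforlisten/killprocess handling:

    # Start bdevio in wait-for-RPC mode (-w); remaining flags copied from the trace.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!

    # Once the app is listening, fire every registered CUnit suite.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    wait "$bdevio_pid"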
00:07:41.287 [2024-11-21 04:53:57.968818] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72014 ] 00:07:41.546 [2024-11-21 04:53:58.121399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.546 [2024-11-21 04:53:58.147393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.546 [2024-11-21 04:53:58.147637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.546 [2024-11-21 04:53:58.147739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.111 04:53:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.111 04:53:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:42.111 04:53:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.371 I/O targets: 00:07:42.371 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:42.371 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:42.371 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.371 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.371 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.371 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:42.371 00:07:42.371 00:07:42.371 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.371 http://cunit.sourceforge.net/ 00:07:42.371 00:07:42.371 00:07:42.371 Suite: bdevio tests on: Nvme3n1 00:07:42.371 Test: blockdev write read block ...passed 00:07:42.371 Test: blockdev write zeroes read block ...passed 00:07:42.371 Test: blockdev write zeroes read no split ...passed 00:07:42.371 Test: blockdev write zeroes read split ...passed 00:07:42.371 Test: blockdev write zeroes read split partial ...passed 00:07:42.371 Test: blockdev reset ...[2024-11-21 04:53:58.918163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:42.371 [2024-11-21 04:53:58.920102] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:42.371 passed 00:07:42.371 Test: blockdev write read 8 blocks ...passed 00:07:42.371 Test: blockdev write read size > 128k ...passed 00:07:42.371 Test: blockdev write read invalid size ...passed 00:07:42.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.371 Test: blockdev write read max offset ...passed 00:07:42.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.371 Test: blockdev writev readv 8 blocks ...passed 00:07:42.371 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.371 Test: blockdev writev readv block ...passed 00:07:42.371 Test: blockdev writev readv size > 128k ...passed 00:07:42.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.371 Test: blockdev comparev and writev ...[2024-11-21 04:53:58.926209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd206000 len:0x1000 00:07:42.371 [2024-11-21 04:53:58.926257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev nvme passthru rw ...passed 00:07:42.371 Test: blockdev nvme passthru vendor specific ...[2024-11-21 04:53:58.926983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.371 [2024-11-21 04:53:58.927012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev nvme admin passthru ...passed 00:07:42.371 Test: blockdev copy ...passed 00:07:42.371 Suite: bdevio tests on: Nvme2n3 00:07:42.371 Test: blockdev write read block ...passed 00:07:42.371 Test: blockdev write zeroes read block ...passed 00:07:42.371 Test: blockdev write zeroes read no split ...passed 00:07:42.371 Test: blockdev write zeroes read split ...passed 00:07:42.371 Test: blockdev write zeroes read split partial ...passed 00:07:42.371 Test: blockdev reset ...[2024-11-21 04:53:58.942311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.371 [2024-11-21 04:53:58.943970] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:42.371 passed 00:07:42.371 Test: blockdev write read 8 blocks ...passed 00:07:42.371 Test: blockdev write read size > 128k ...passed 00:07:42.371 Test: blockdev write read invalid size ...passed 00:07:42.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.371 Test: blockdev write read max offset ...passed 00:07:42.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.371 Test: blockdev writev readv 8 blocks ...passed 00:07:42.371 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.371 Test: blockdev writev readv block ...passed 00:07:42.371 Test: blockdev writev readv size > 128k ...passed 00:07:42.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.371 Test: blockdev comparev and writev ...[2024-11-21 04:53:58.950009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9405000 len:0x1000 00:07:42.371 [2024-11-21 04:53:58.950048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev nvme passthru rw ...passed 00:07:42.371 Test: blockdev nvme passthru vendor specific ...[2024-11-21 04:53:58.950671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.371 [2024-11-21 04:53:58.950697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev nvme admin passthru ...passed 00:07:42.371 Test: blockdev copy ...passed 00:07:42.371 Suite: bdevio tests on: Nvme2n2 00:07:42.371 Test: blockdev write read block ...passed 00:07:42.371 Test: blockdev write zeroes read block ...passed 00:07:42.371 Test: blockdev write zeroes read no split ...passed 00:07:42.371 Test: blockdev write zeroes read split ...passed 00:07:42.371 Test: blockdev write zeroes read split partial ...passed 00:07:42.371 Test: blockdev reset ...[2024-11-21 04:53:58.965628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.371 [2024-11-21 04:53:58.967103] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:42.371 passed 00:07:42.371 Test: blockdev write read 8 blocks ...passed 00:07:42.371 Test: blockdev write read size > 128k ...passed 00:07:42.371 Test: blockdev write read invalid size ...passed 00:07:42.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.371 Test: blockdev write read max offset ...passed 00:07:42.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.371 Test: blockdev writev readv 8 blocks ...passed 00:07:42.371 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.371 Test: blockdev writev readv block ...passed 00:07:42.371 Test: blockdev writev readv size > 128k ...passed 00:07:42.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.371 Test: blockdev comparev and writev ...[2024-11-21 04:53:58.973112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4a36000 len:0x1000 00:07:42.371 [2024-11-21 04:53:58.973148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev nvme passthru rw ...passed 00:07:42.371 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.371 Test: blockdev nvme admin passthru ...[2024-11-21 04:53:58.973885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.371 [2024-11-21 04:53:58.973910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev copy ...passed 00:07:42.371 Suite: bdevio tests on: Nvme2n1 00:07:42.371 Test: blockdev write read block ...passed 00:07:42.371 Test: blockdev write zeroes read block ...passed 00:07:42.371 Test: blockdev write zeroes read no split ...passed 00:07:42.371 Test: blockdev write zeroes read split ...passed 00:07:42.371 Test: blockdev write zeroes read split partial ...passed 00:07:42.371 Test: blockdev reset ...[2024-11-21 04:53:58.990845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:42.371 [2024-11-21 04:53:58.992442] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:42.371 passed 00:07:42.371 Test: blockdev write read 8 blocks ...passed 00:07:42.371 Test: blockdev write read size > 128k ...passed 00:07:42.371 Test: blockdev write read invalid size ...passed 00:07:42.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.371 Test: blockdev write read max offset ...passed 00:07:42.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.371 Test: blockdev writev readv 8 blocks ...passed 00:07:42.371 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.371 Test: blockdev writev readv block ...passed 00:07:42.371 Test: blockdev writev readv size > 128k ...passed 00:07:42.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.371 Test: blockdev comparev and writev ...[2024-11-21 04:53:58.998058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4a30000 len:0x1000 00:07:42.371 [2024-11-21 04:53:58.998104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.371 Test: blockdev nvme passthru rw ...passed 00:07:42.371 Test: blockdev nvme passthru vendor specific ...[2024-11-21 04:53:58.998713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.371 [2024-11-21 04:53:58.998740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.371 passed 00:07:42.372 Test: blockdev nvme admin passthru ...passed 00:07:42.372 Test: blockdev copy ...passed 00:07:42.372 Suite: bdevio tests on: Nvme1n1 00:07:42.372 Test: blockdev write read block ...passed 00:07:42.372 Test: blockdev write zeroes read block ...passed 00:07:42.372 Test: blockdev write zeroes read no split ...passed 00:07:42.372 Test: blockdev write zeroes read split ...passed 00:07:42.372 Test: blockdev write zeroes read split partial ...passed 00:07:42.372 Test: blockdev reset ...[2024-11-21 04:53:59.014436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:42.372 [2024-11-21 04:53:59.015782] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:42.372 passed 00:07:42.372 Test: blockdev write read 8 blocks ...passed 00:07:42.372 Test: blockdev write read size > 128k ...passed 00:07:42.372 Test: blockdev write read invalid size ...passed 00:07:42.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.372 Test: blockdev write read max offset ...passed 00:07:42.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.372 Test: blockdev writev readv 8 blocks ...passed 00:07:42.372 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.372 Test: blockdev writev readv block ...passed 00:07:42.372 Test: blockdev writev readv size > 128k ...passed 00:07:42.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.372 Test: blockdev comparev and writev ...[2024-11-21 04:53:59.023855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e4a2c000 len:0x1000 00:07:42.372 [2024-11-21 04:53:59.023971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.372 passed 00:07:42.372 Test: blockdev nvme passthru rw ...passed 00:07:42.372 Test: blockdev nvme passthru vendor specific ...[2024-11-21 04:53:59.024890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.372 [2024-11-21 04:53:59.024972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.372 passed 00:07:42.372 Test: blockdev nvme admin passthru ...passed 00:07:42.372 Test: blockdev copy ...passed 00:07:42.372 Suite: bdevio tests on: Nvme0n1 00:07:42.372 Test: blockdev write read block ...passed 00:07:42.372 Test: blockdev write zeroes read block ...passed 00:07:42.372 Test: blockdev write zeroes read no split ...passed 00:07:42.372 Test: blockdev write zeroes read split ...passed 00:07:42.372 Test: blockdev write zeroes read split partial ...passed 00:07:42.372 Test: blockdev reset ...[2024-11-21 04:53:59.050842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:42.372 [2024-11-21 04:53:59.052172] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:42.372 passed 00:07:42.372 Test: blockdev write read 8 blocks ...passed 00:07:42.372 Test: blockdev write read size > 128k ...passed 00:07:42.372 Test: blockdev write read invalid size ...passed 00:07:42.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.372 Test: blockdev write read max offset ...passed 00:07:42.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.372 Test: blockdev writev readv 8 blocks ...passed 00:07:42.372 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.372 Test: blockdev writev readv block ...passed 00:07:42.372 Test: blockdev writev readv size > 128k ...passed 00:07:42.372 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.372 Test: blockdev comparev and writev ...[2024-11-21 04:53:59.056269] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:42.372 separate metadata which is not supported yet. 
00:07:42.372 passed 00:07:42.372 Test: blockdev nvme passthru rw ...passed 00:07:42.372 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.372 Test: blockdev nvme admin passthru ...[2024-11-21 04:53:59.056776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:42.372 [2024-11-21 04:53:59.056804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:42.372 passed 00:07:42.372 Test: blockdev copy ...passed 00:07:42.372 00:07:42.372 Run Summary: Type Total Ran Passed Failed Inactive 00:07:42.372 suites 6 6 n/a 0 0 00:07:42.372 tests 138 138 138 0 0 00:07:42.372 asserts 893 893 893 0 n/a 00:07:42.372 00:07:42.372 Elapsed time = 0.362 seconds 00:07:42.372 0 00:07:42.372 04:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72014 00:07:42.372 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72014 ']' 00:07:42.372 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72014 00:07:42.372 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:42.372 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.372 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72014 00:07:42.630 killing process with pid 72014 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72014' 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72014 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72014 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:42.630 ************************************ 00:07:42.630 END TEST bdev_bounds 00:07:42.630 ************************************ 00:07:42.630 00:07:42.630 real 0m1.344s 00:07:42.630 user 0m3.460s 00:07:42.630 sys 0m0.256s 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.630 04:53:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:42.630 04:53:59 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.630 04:53:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:42.630 04:53:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.630 04:53:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.630 ************************************ 00:07:42.630 START TEST bdev_nbd 00:07:42.630 ************************************ 00:07:42.630 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:42.630 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:42.631 04:53:59 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72063 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72063 /var/tmp/spdk-nbd.sock 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72063 ']' 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:42.631 04:53:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:42.889 [2024-11-21 04:53:59.374928] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
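bdev_nbd swaps the main spdk_tgt for the lighter bdev_svc app on its own RPC socket (/var/tmp/spdk-nbd.sock), exports each bdev as a kernel /dev/nbdX block device, and sanity-checks every device with one direct-I/O read. The per-device steps traced below for Nvme0n1 through Nvme3n1 condense to roughly the following (socket, device names, and dd parameters as logged; the compact loop and unbounded wait are a reconstruction of the harness's bounded retry logic):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        nbd=$(rpc nbd_start_disk "$bdev")   # prints the /dev/nbdX it claimed
        # Wait for the kernel to publish the device before touching it.
        until grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
        # One 4 KiB block read through the block layer, bypassing the page cache.
        dd if="$nbd" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
    done

After the checks, nbd_get_disks reports the six device-to-bdev mappings and nbd_stop_disk tears each export down, as the JSON dump and stop calls further on show.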
00:07:42.889 [2024-11-21 04:53:59.375048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.889 [2024-11-21 04:53:59.520851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.889 [2024-11-21 04:53:59.543978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:43.454 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.020 1+0 records in 
00:07:44.020 1+0 records out 00:07:44.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431811 s, 9.5 MB/s 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.020 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.021 1+0 records in 00:07:44.021 1+0 records out 00:07:44.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533754 s, 7.7 MB/s 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.021 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.280 1+0 records in 00:07:44.280 1+0 records out 00:07:44.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801402 s, 5.1 MB/s 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.280 04:54:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.541 1+0 records in 00:07:44.541 1+0 records out 00:07:44.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000952745 s, 4.3 MB/s 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.541 04:54:01 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.541 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.836 1+0 records in 00:07:44.836 1+0 records out 00:07:44.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903779 s, 4.5 MB/s 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:44.836 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.109 1+0 records in 00:07:45.109 1+0 records out 00:07:45.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814769 s, 5.0 MB/s 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:45.109 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.370 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:45.370 { 00:07:45.370 "nbd_device": "/dev/nbd0", 00:07:45.370 "bdev_name": "Nvme0n1" 00:07:45.370 }, 00:07:45.370 { 00:07:45.370 "nbd_device": "/dev/nbd1", 00:07:45.370 "bdev_name": "Nvme1n1" 00:07:45.370 }, 00:07:45.370 { 00:07:45.370 "nbd_device": "/dev/nbd2", 00:07:45.370 "bdev_name": "Nvme2n1" 00:07:45.370 }, 00:07:45.370 { 00:07:45.370 "nbd_device": "/dev/nbd3", 00:07:45.370 "bdev_name": "Nvme2n2" 00:07:45.370 }, 00:07:45.370 { 00:07:45.370 "nbd_device": "/dev/nbd4", 00:07:45.370 "bdev_name": "Nvme2n3" 00:07:45.370 }, 00:07:45.370 { 00:07:45.370 "nbd_device": "/dev/nbd5", 00:07:45.370 "bdev_name": "Nvme3n1" 00:07:45.371 } 00:07:45.371 ]' 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:45.371 { 00:07:45.371 "nbd_device": "/dev/nbd0", 00:07:45.371 "bdev_name": "Nvme0n1" 00:07:45.371 }, 00:07:45.371 { 00:07:45.371 "nbd_device": "/dev/nbd1", 00:07:45.371 "bdev_name": "Nvme1n1" 00:07:45.371 }, 00:07:45.371 { 00:07:45.371 "nbd_device": "/dev/nbd2", 00:07:45.371 "bdev_name": "Nvme2n1" 00:07:45.371 }, 00:07:45.371 { 00:07:45.371 "nbd_device": "/dev/nbd3", 00:07:45.371 "bdev_name": "Nvme2n2" 00:07:45.371 }, 00:07:45.371 { 00:07:45.371 "nbd_device": "/dev/nbd4", 00:07:45.371 "bdev_name": "Nvme2n3" 00:07:45.371 }, 00:07:45.371 { 00:07:45.371 "nbd_device": "/dev/nbd5", 00:07:45.371 "bdev_name": "Nvme3n1" 00:07:45.371 } 00:07:45.371 ]' 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.371 04:54:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.371 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.633 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.895 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.154 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.415 04:54:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.677 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.937 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.937 04:54:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.937 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.937 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.937 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:46.938 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:47.199 /dev/nbd0 00:07:47.199 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:47.199 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.200 
04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.200 1+0 records in 00:07:47.200 1+0 records out 00:07:47.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00198524 s, 2.1 MB/s 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.200 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:47.200 /dev/nbd1 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:47.460 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.461 1+0 records in 00:07:47.461 1+0 records out 00:07:47.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598591 s, 6.8 MB/s 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.461 04:54:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:47.461 /dev/nbd10 00:07:47.720 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:47.720 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:47.720 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.721 1+0 records in 00:07:47.721 1+0 records out 00:07:47.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001077 s, 3.8 MB/s 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.721 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:47.981 /dev/nbd11 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.981 1+0 records in 00:07:47.981 1+0 records out 00:07:47.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00163713 s, 2.5 MB/s 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:47.981 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:47.981 /dev/nbd12 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.242 1+0 records in 00:07:48.242 1+0 records out 00:07:48.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739867 s, 5.5 MB/s 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:48.242 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:48.242 /dev/nbd13 00:07:48.502 04:54:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.502 1+0 records in 00:07:48.502 1+0 records out 00:07:48.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134997 s, 3.0 MB/s 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:48.502 04:54:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd0", 00:07:48.502 "bdev_name": "Nvme0n1" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd1", 00:07:48.502 "bdev_name": "Nvme1n1" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd10", 00:07:48.502 "bdev_name": "Nvme2n1" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd11", 00:07:48.502 "bdev_name": "Nvme2n2" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd12", 00:07:48.502 "bdev_name": "Nvme2n3" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd13", 00:07:48.502 "bdev_name": "Nvme3n1" 00:07:48.502 } 00:07:48.502 ]' 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd0", 00:07:48.502 "bdev_name": "Nvme0n1" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd1", 00:07:48.502 "bdev_name": "Nvme1n1" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd10", 00:07:48.502 "bdev_name": "Nvme2n1" 00:07:48.502 }, 00:07:48.502 
{ 00:07:48.502 "nbd_device": "/dev/nbd11", 00:07:48.502 "bdev_name": "Nvme2n2" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd12", 00:07:48.502 "bdev_name": "Nvme2n3" 00:07:48.502 }, 00:07:48.502 { 00:07:48.502 "nbd_device": "/dev/nbd13", 00:07:48.502 "bdev_name": "Nvme3n1" 00:07:48.502 } 00:07:48.502 ]' 00:07:48.502 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:48.762 /dev/nbd1 00:07:48.762 /dev/nbd10 00:07:48.762 /dev/nbd11 00:07:48.762 /dev/nbd12 00:07:48.762 /dev/nbd13' 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:48.762 /dev/nbd1 00:07:48.762 /dev/nbd10 00:07:48.762 /dev/nbd11 00:07:48.762 /dev/nbd12 00:07:48.762 /dev/nbd13' 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:48.762 256+0 records in 00:07:48.762 256+0 records out 00:07:48.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00783856 s, 134 MB/s 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.762 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.022 256+0 records in 00:07:49.022 256+0 records out 00:07:49.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234605 s, 4.5 MB/s 00:07:49.022 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.022 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.022 256+0 records in 00:07:49.022 256+0 records out 00:07:49.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145621 s, 7.2 MB/s 00:07:49.022 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.022 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:49.282 256+0 records in 00:07:49.282 256+0 records out 00:07:49.282 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.194478 s, 5.4 MB/s 00:07:49.282 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.282 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:49.282 256+0 records in 00:07:49.282 256+0 records out 00:07:49.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125089 s, 8.4 MB/s 00:07:49.282 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.282 04:54:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:49.543 256+0 records in 00:07:49.543 256+0 records out 00:07:49.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182286 s, 5.8 MB/s 00:07:49.543 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.543 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:49.804 256+0 records in 00:07:49.804 256+0 records out 00:07:49.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.241316 s, 4.3 MB/s 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:49.804 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.805 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.066 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.327 04:54:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.587 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.847 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.108 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.369 04:54:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.369 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.369 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.369 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:51.629 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:51.629 malloc_lvol_verify 00:07:51.891 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:51.891 f4b29a05-febe-4009-ab7e-27ad555e85df 00:07:51.891 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:52.152 f425cfc1-611d-4f68-ba8a-e1b1a85f5c34 00:07:52.152 04:54:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:52.414 /dev/nbd0 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:52.414 mke2fs 1.47.0 (5-Feb-2023) 00:07:52.414 Discarding device blocks: 0/4096 done 00:07:52.414 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:52.414 00:07:52.414 Allocating group tables: 0/1 done 00:07:52.414 Writing inode tables: 0/1 done 00:07:52.414 Creating journal (1024 blocks): done 00:07:52.414 Writing superblocks and filesystem accounting information: 0/1 done 00:07:52.414 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.414 04:54:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.414 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72063 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72063 ']' 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72063 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72063 00:07:52.674 killing process with pid 72063 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72063' 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72063 00:07:52.674 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72063 00:07:52.935 ************************************ 00:07:52.935 END TEST bdev_nbd 00:07:52.935 ************************************ 00:07:52.935 04:54:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:52.935 00:07:52.935 real 0m10.321s 00:07:52.935 user 0m14.396s 00:07:52.935 sys 0m3.563s 00:07:52.935 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.935 04:54:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.196 skipping fio tests on NVMe due to multi-ns failures. 00:07:53.196 04:54:09 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:53.196 04:54:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:53.196 04:54:09 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
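
[annotation] The nbd phase above repeatedly exercises the waitfornbd helper from autotest_common.sh. Below is a condensed sketch of that helper, reconstructed from the xtrace lines visible in this log (autotest_common.sh@872-@893); the retry delay and the scratch-file path are assumptions, since every device in this run shows up on the first pass and neither detail is traced here.

    # Sketch reconstructed from the trace above; not the verbatim helper.
    waitfornbd() {
        local nbd_name=$1
        local i
        # Phase 1 (@875-@877): wait until the kernel lists the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; no sleep appears in this trace
        done
        # Phase 2 (@888-@893): prove the device answers I/O with one
        # 4 KiB O_DIRECT read, then confirm the scratch file is non-empty.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1   # assumed
        done
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        return 1
    }

The teardown side (waitfornbd_exit, nbd_common.sh@35-@45 above) inverts phase 1: after nbd_stop_disk it polls until the name disappears from /proc/partitions before returning.
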
00:07:53.196 04:54:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.196 04:54:09 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.196 04:54:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:53.196 04:54:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.196 04:54:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.196 ************************************ 00:07:53.196 START TEST bdev_verify 00:07:53.196 ************************************ 00:07:53.196 04:54:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.196 [2024-11-21 04:54:09.781514] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:07:53.196 [2024-11-21 04:54:09.781699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72441 ] 00:07:53.456 [2024-11-21 04:54:09.948411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.456 [2024-11-21 04:54:09.991666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.456 [2024-11-21 04:54:09.991702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.028 Running I/O for 5 seconds... 00:07:56.359 18688.00 IOPS, 73.00 MiB/s [2024-11-21T04:54:14.033Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-21T04:54:14.975Z] 19200.00 IOPS, 75.00 MiB/s [2024-11-21T04:54:15.918Z] 19056.00 IOPS, 74.44 MiB/s [2024-11-21T04:54:15.918Z] 19059.20 IOPS, 74.45 MiB/s 00:07:59.184 Latency(us) 00:07:59.184 [2024-11-21T04:54:15.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.184 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x0 length 0xbd0bd 00:07:59.184 Nvme0n1 : 5.04 1573.23 6.15 0.00 0.00 81125.45 18955.03 79046.50 00:07:59.184 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:59.184 Nvme0n1 : 5.04 1549.59 6.05 0.00 0.00 82331.07 21475.64 80256.39 00:07:59.184 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x0 length 0xa0000 00:07:59.184 Nvme1n1 : 5.05 1572.74 6.14 0.00 0.00 81026.60 22181.42 74206.92 00:07:59.184 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0xa0000 length 0xa0000 00:07:59.184 Nvme1n1 : 5.04 1549.13 6.05 0.00 0.00 82220.77 25004.50 77030.01 00:07:59.184 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x0 length 0x80000 00:07:59.184 Nvme2n1 : 5.05 1572.28 6.14 0.00 0.00 80867.01 20669.05 67350.84 00:07:59.184 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x80000 length 0x80000 00:07:59.184 Nvme2n1 : 5.08 1560.72 6.10 0.00 0.00 81386.33 14922.04 64931.05 00:07:59.184 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x0 length 0x80000 00:07:59.184 Nvme2n2 : 5.07 1577.45 6.16 0.00 0.00 80390.31 9628.75 65737.65 00:07:59.184 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x80000 length 0x80000 00:07:59.184 Nvme2n2 : 5.09 1560.26 6.09 0.00 0.00 81224.30 15123.69 65737.65 00:07:59.184 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x0 length 0x80000 00:07:59.184 Nvme2n3 : 5.09 1584.69 6.19 0.00 0.00 80048.12 13712.15 66140.95 00:07:59.184 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x80000 length 0x80000 00:07:59.184 Nvme2n3 : 5.09 1559.82 6.09 0.00 0.00 81123.81 15224.52 68157.44 00:07:59.184 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x0 length 0x20000 00:07:59.184 Nvme3n1 : 5.09 1583.90 6.19 0.00 0.00 79936.16 14922.04 68560.74 00:07:59.184 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.184 Verification LBA range: start 0x20000 length 0x20000 00:07:59.184 Nvme3n1 : 5.09 1559.37 6.09 0.00 0.00 81039.90 15325.34 66544.25 00:07:59.184 [2024-11-21T04:54:15.918Z] =================================================================================================================== 00:07:59.184 [2024-11-21T04:54:15.918Z] Total : 18803.19 73.45 0.00 0.00 81053.40 9628.75 80256.39 00:07:59.796 ************************************ 00:07:59.796 END TEST bdev_verify 00:07:59.796 ************************************ 00:07:59.796 00:07:59.796 real 0m6.594s 00:07:59.796 user 0m12.255s 00:07:59.796 sys 0m0.333s 00:07:59.796 04:54:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.796 04:54:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:59.796 04:54:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.796 04:54:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:59.796 04:54:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.796 04:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.796 ************************************ 00:07:59.796 START TEST bdev_verify_big_io 00:07:59.796 ************************************ 00:07:59.796 04:54:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:59.796 [2024-11-21 04:54:16.430428] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
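
[annotation] Both verify phases drive the same bdevperf binary; only the I/O size differs (-o 4096 for bdev_verify, -o 65536 for bdev_verify_big_io). The sketch below reproduces the bdev_verify invocation outside the harness. The command line is taken verbatim from the trace; the JSON config is an assumed minimal example, since the real test/bdev/bdev.json is generated earlier in the job and is not shown in this log chunk, and the PCI address is a placeholder.

    # Assumed minimal config: attach one local NVMe controller.
    # The harness's real bdev.json attaches all four controllers.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF

    # -q 128: queue depth per bdev; -o 4096: 4 KiB I/Os; -w verify:
    # write-read-back-and-check workload; -t 5: run for 5 seconds;
    # -m 0x3: reactors on cores 0-1, matching the two "Reactor started"
    # notices above. -C is kept verbatim from the traced command.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
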
00:07:59.796 [2024-11-21 04:54:16.430573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72535 ] 00:08:00.057 [2024-11-21 04:54:16.593581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.057 [2024-11-21 04:54:16.635815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.058 [2024-11-21 04:54:16.635904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.630 Running I/O for 5 seconds... 00:08:04.556 688.00 IOPS, 43.00 MiB/s [2024-11-21T04:54:23.186Z] 1687.50 IOPS, 105.47 MiB/s [2024-11-21T04:54:23.186Z] 1965.00 IOPS, 122.81 MiB/s 00:08:06.452 Latency(us) 00:08:06.452 [2024-11-21T04:54:23.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.452 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x0 length 0xbd0b 00:08:06.452 Nvme0n1 : 5.76 111.03 6.94 0.00 0.00 1121500.48 24702.03 1232480.10 00:08:06.452 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:06.452 Nvme0n1 : 5.70 132.50 8.28 0.00 0.00 932249.23 33473.77 1025991.29 00:08:06.452 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x0 length 0xa000 00:08:06.452 Nvme1n1 : 5.77 110.98 6.94 0.00 0.00 1085911.36 113730.17 1135688.47 00:08:06.452 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0xa000 length 0xa000 00:08:06.452 Nvme1n1 : 5.70 124.91 7.81 0.00 0.00 959684.37 91145.45 1619646.62 00:08:06.452 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x0 length 0x8000 00:08:06.452 Nvme2n1 : 5.77 110.94 6.93 0.00 0.00 1052469.09 132281.90 1077613.49 00:08:06.452 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x8000 length 0x8000 00:08:06.452 Nvme2n1 : 5.70 125.05 7.82 0.00 0.00 926202.35 91548.75 1645457.72 00:08:06.452 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x0 length 0x8000 00:08:06.452 Nvme2n2 : 5.89 112.54 7.03 0.00 0.00 1000335.91 119376.34 1103424.59 00:08:06.452 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x8000 length 0x8000 00:08:06.452 Nvme2n2 : 5.85 134.06 8.38 0.00 0.00 841904.83 78643.20 1690627.15 00:08:06.452 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x0 length 0x8000 00:08:06.452 Nvme2n3 : 5.91 126.84 7.93 0.00 0.00 879134.87 4814.38 1251838.42 00:08:06.452 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x8000 length 0x8000 00:08:06.452 Nvme2n3 : 5.90 142.78 8.92 0.00 0.00 769998.31 6402.36 1716438.25 00:08:06.452 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x0 length 0x2000 00:08:06.452 Nvme3n1 : 5.91 126.64 7.92 0.00 0.00 852095.55 4864.79 1277649.53 00:08:06.452 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:08:06.452 Verification LBA range: start 0x2000 length 0x2000 00:08:06.452 Nvme3n1 : 5.93 173.12 10.82 0.00 0.00 616392.55 894.82 1297007.85 00:08:06.452 [2024-11-21T04:54:23.186Z] =================================================================================================================== 00:08:06.452 [2024-11-21T04:54:23.186Z] Total : 1531.39 95.71 0.00 0.00 901450.00 894.82 1716438.25 00:08:07.385 00:08:07.385 real 0m7.586s 00:08:07.385 user 0m14.280s 00:08:07.385 sys 0m0.332s 00:08:07.385 04:54:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.385 ************************************ 00:08:07.385 END TEST bdev_verify_big_io 00:08:07.385 ************************************ 00:08:07.385 04:54:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:07.385 04:54:23 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.385 04:54:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:07.385 04:54:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.385 04:54:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.385 ************************************ 00:08:07.385 START TEST bdev_write_zeroes 00:08:07.385 ************************************ 00:08:07.385 04:54:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.385 [2024-11-21 04:54:24.081914] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:07.385 [2024-11-21 04:54:24.082087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72633 ] 00:08:07.650 [2024-11-21 04:54:24.252596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.650 [2024-11-21 04:54:24.278945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.219 Running I/O for 1 seconds... 
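The single-core write_zeroes pass whose results follow issues 4096-byte zero-fill commands at queue depth 128 for one second per bdev (-w write_zeroes in the traced command above). As a rough out-of-band analogy only, a zero-fill spot check on a plain Linux block device could look like the sketch below; the device path is hypothetical, the write is destructive, and this bypasses the SPDK bdev layer entirely:
  # hypothetical, DESTRUCTIVE zero-fill spot check on a raw block device;
  # illustrative analogy only -- the test above runs through SPDK's bdev layer
  dev=/dev/nvme0n1          # hypothetical target device (will be overwritten!)
  blocks=16                 # number of 4096-byte blocks to write and check
  dd if=/dev/zero of="$dev" bs=4096 count="$blocks" oflag=direct conv=fsync
  cmp -n $((4096 * blocks)) "$dev" /dev/zero && echo "range reads back as zeroes"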
00:08:09.155 60905.00 IOPS, 237.91 MiB/s 00:08:09.155 Latency(us) 00:08:09.155 [2024-11-21T04:54:25.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.155 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.155 Nvme0n1 : 1.02 10130.74 39.57 0.00 0.00 12609.89 5242.88 21778.12 00:08:09.155 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.155 Nvme1n1 : 1.02 10142.32 39.62 0.00 0.00 12576.34 9225.45 21576.47 00:08:09.155 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.155 Nvme2n1 : 1.02 10130.73 39.57 0.00 0.00 12552.72 7511.43 20971.52 00:08:09.155 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.155 Nvme2n2 : 1.02 10119.30 39.53 0.00 0.00 12542.24 7007.31 21072.34 00:08:09.155 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.155 Nvme2n3 : 1.03 10046.36 39.24 0.00 0.00 12606.07 6074.68 21072.34 00:08:09.155 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:09.155 Nvme3n1 : 1.03 10033.99 39.20 0.00 0.00 12599.26 7057.72 23189.66 00:08:09.155 [2024-11-21T04:54:25.889Z] =================================================================================================================== 00:08:09.155 [2024-11-21T04:54:25.889Z] Total : 60603.43 236.73 0.00 0.00 12581.03 5242.88 23189.66 00:08:09.416 00:08:09.416 real 0m1.937s 00:08:09.416 user 0m1.622s 00:08:09.416 sys 0m0.203s 00:08:09.416 04:54:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.416 ************************************ 00:08:09.416 END TEST bdev_write_zeroes 00:08:09.417 ************************************ 00:08:09.417 04:54:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:09.417 04:54:25 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.417 04:54:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:09.417 04:54:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.417 04:54:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.417 ************************************ 00:08:09.417 START TEST bdev_json_nonenclosed 00:08:09.417 ************************************ 00:08:09.417 04:54:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.417 [2024-11-21 04:54:26.088101] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
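bdev_json_nonenclosed, started above, is a negative test: it hands bdevperf a config file whose contents are not wrapped in a top-level JSON object and expects startup to fail with the "not enclosed in {}" error visible below. The actual contents of nonenclosed.json are not shown in this log; a sketch of the two shapes being distinguished (file paths hypothetical):
  # hypothetical reconstruction of the invalid, non-enclosed shape
  cat > /tmp/nonenclosed-example.json <<'EOF'
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
  EOF
  # a valid config wraps everything in one top-level object, as in the
  # generated bdev.json used throughout this run:
  cat > /tmp/enclosed-example.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF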
00:08:09.417 [2024-11-21 04:54:26.088285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72678 ] 00:08:09.677 [2024-11-21 04:54:26.248559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.677 [2024-11-21 04:54:26.294243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.677 [2024-11-21 04:54:26.294390] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:09.677 [2024-11-21 04:54:26.294420] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.677 [2024-11-21 04:54:26.294436] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.677 00:08:09.677 real 0m0.396s 00:08:09.677 user 0m0.160s 00:08:09.677 sys 0m0.132s 00:08:09.677 04:54:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.677 04:54:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:09.677 ************************************ 00:08:09.677 END TEST bdev_json_nonenclosed 00:08:09.677 ************************************ 00:08:09.937 04:54:26 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.937 04:54:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:09.937 04:54:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.937 04:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.937 ************************************ 00:08:09.937 START TEST bdev_json_nonarray 00:08:09.937 ************************************ 00:08:09.937 04:54:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:09.937 [2024-11-21 04:54:26.557674] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:09.937 [2024-11-21 04:54:26.557896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72698 ] 00:08:10.198 [2024-11-21 04:54:26.727498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.198 [2024-11-21 04:54:26.769775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.198 [2024-11-21 04:54:26.769937] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
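The companion bdev_json_nonarray test trips the other validation branch seen in the error above: "subsystems" is present but is not an array. A quick pre-flight check for both conditions can be sketched with jq (illustrative only; the harness relies on bdevperf's own parser, and the path is hypothetical):
  # illustrative shape check for a subsystem config file
  cfg=/tmp/enclosed-example.json   # hypothetical path
  jq -e 'type == "object" and (.subsystems | type == "array")' "$cfg" \
      && echo "config shape OK" \
      || echo "config not an object, or subsystems not an array"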
00:08:10.198 [2024-11-21 04:54:26.769966] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:10.198 [2024-11-21 04:54:26.769986] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.198 00:08:10.198 real 0m0.403s 00:08:10.198 user 0m0.167s 00:08:10.198 sys 0m0.130s 00:08:10.198 04:54:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.198 ************************************ 00:08:10.198 END TEST bdev_json_nonarray 00:08:10.198 ************************************ 00:08:10.198 04:54:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:10.198 04:54:26 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:10.461 04:54:26 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:10.461 04:54:26 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:10.461 04:54:26 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:10.461 04:54:26 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:10.461 00:08:10.461 real 0m31.730s 00:08:10.461 user 0m48.816s 00:08:10.461 sys 0m5.938s 00:08:10.461 04:54:26 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.461 04:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.461 ************************************ 00:08:10.461 END TEST blockdev_nvme 00:08:10.461 ************************************ 00:08:10.461 04:54:26 -- spdk/autotest.sh@209 -- # uname -s 00:08:10.461 04:54:26 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:10.461 04:54:26 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:10.461 04:54:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.461 04:54:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.461 04:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:10.461 ************************************ 00:08:10.461 START TEST blockdev_nvme_gpt 00:08:10.461 ************************************ 00:08:10.461 04:54:26 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:10.461 * Looking for test storage... 
00:08:10.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:10.461 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.461 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.461 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.461 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:10.461 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.462 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.462 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.462 04:54:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.462 --rc genhtml_branch_coverage=1 00:08:10.462 --rc genhtml_function_coverage=1 00:08:10.462 --rc genhtml_legend=1 00:08:10.462 --rc geninfo_all_blocks=1 00:08:10.462 --rc geninfo_unexecuted_blocks=1 00:08:10.462 00:08:10.462 ' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.462 --rc 
genhtml_branch_coverage=1 00:08:10.462 --rc genhtml_function_coverage=1 00:08:10.462 --rc genhtml_legend=1 00:08:10.462 --rc geninfo_all_blocks=1 00:08:10.462 --rc geninfo_unexecuted_blocks=1 00:08:10.462 00:08:10.462 ' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.462 --rc genhtml_branch_coverage=1 00:08:10.462 --rc genhtml_function_coverage=1 00:08:10.462 --rc genhtml_legend=1 00:08:10.462 --rc geninfo_all_blocks=1 00:08:10.462 --rc geninfo_unexecuted_blocks=1 00:08:10.462 00:08:10.462 ' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.462 --rc genhtml_branch_coverage=1 00:08:10.462 --rc genhtml_function_coverage=1 00:08:10.462 --rc genhtml_legend=1 00:08:10.462 --rc geninfo_all_blocks=1 00:08:10.462 --rc geninfo_unexecuted_blocks=1 00:08:10.462 00:08:10.462 ' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72782 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72782 
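The lcov gate traced earlier (scripts/common.sh running "lt 1.15 2") splits each version string on ".", "-" and ":" and compares the resulting fields numerically, left to right. A condensed sketch that mirrors the traced logic rather than copying common.sh verbatim:
  # condensed sketch of the dotted-version comparison traced above
  version_lt() {
      local IFS='.-:'                 # same separators the trace splits on
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # e.g. 1 < 2 for 1.15 vs 2
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                        # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"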
00:08:10.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72782 ']' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.462 04:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.462 04:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:10.724 [2024-11-21 04:54:27.279271] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:10.724 [2024-11-21 04:54:27.279458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72782 ] 00:08:10.724 [2024-11-21 04:54:27.449240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.983 [2024-11-21 04:54:27.495338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.556 04:54:28 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.556 04:54:28 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:11.556 04:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:11.556 04:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:11.556 04:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:11.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.109 Waiting for block devices as requested 00:08:12.109 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.109 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.109 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.372 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:17.667 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:17.667 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:17.667 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:17.667 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:17.667 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:17.667 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:17.667 04:54:34 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:17.668 BYT; 00:08:17.668 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:17.668 BYT; 00:08:17.668 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:17.668 04:54:34 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:17.668 04:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:18.602 The operation has completed successfully. 00:08:18.602 04:54:35 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:19.538 The operation has completed successfully. 00:08:19.538 04:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:20.362 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.362 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.362 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.621 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:20.621 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.621 [] 00:08:20.621 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.621 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:20.621 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:20.621 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:20.621 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:20.621 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:20.621 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.621 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.881 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.881 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:20.881 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.881 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:20.882 04:54:37 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "360908b0-3b50-4cff-9edd-23b7e4706366"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "360908b0-3b50-4cff-9edd-23b7e4706366",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aea0ee69-89f6-46a2-98b7-4574239f1022"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aea0ee69-89f6-46a2-98b7-4574239f1022",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "13bbbb4e-fb36-43d1-a54d-1a80f8f68779"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "13bbbb4e-fb36-43d1-a54d-1a80f8f68779",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1da3e89a-d0c8-49c6-a994-c80c710c590f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1da3e89a-d0c8-49c6-a994-c80c710c590f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e2c909c3-9469-4b79-a409-4f0977a28dc8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e2c909c3-9469-4b79-a409-4f0977a28dc8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:20.882 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72782 00:08:20.882 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72782 ']' 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72782 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72782 00:08:20.883 killing process with pid 72782 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72782' 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72782 00:08:20.883 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72782 00:08:21.449 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:21.449 04:54:37 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:21.449 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:21.449 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.449 04:54:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.449 ************************************ 00:08:21.449 START TEST bdev_hello_world 00:08:21.449 ************************************ 00:08:21.449 04:54:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:21.449 
[2024-11-21 04:54:37.992088] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:21.449 [2024-11-21 04:54:37.992372] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73385 ] 00:08:21.449 [2024-11-21 04:54:38.145578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.449 [2024-11-21 04:54:38.168136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.019 [2024-11-21 04:54:38.545491] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:22.019 [2024-11-21 04:54:38.545539] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:22.019 [2024-11-21 04:54:38.545560] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:22.019 [2024-11-21 04:54:38.547815] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:22.019 [2024-11-21 04:54:38.548630] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:22.019 [2024-11-21 04:54:38.548659] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:22.019 [2024-11-21 04:54:38.549623] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:22.019 00:08:22.019 [2024-11-21 04:54:38.549659] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:22.019 00:08:22.019 real 0m0.800s 00:08:22.019 user 0m0.524s 00:08:22.019 sys 0m0.174s 00:08:22.019 04:54:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.019 ************************************ 00:08:22.019 04:54:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:22.019 END TEST bdev_hello_world 00:08:22.019 ************************************ 00:08:22.281 04:54:38 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:22.281 04:54:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.281 04:54:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.281 04:54:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:22.281 ************************************ 00:08:22.281 START TEST bdev_bounds 00:08:22.281 ************************************ 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:22.281 Process bdevio pid: 73416 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73416 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73416' 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73416 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73416 ']' 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
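The bounds test below launches bdevio against the same bdev.json, waits for its RPC socket at /var/tmp/spdk.sock, then drives the CUnit suites with tests.py; the per-bdev suite results follow. In outline (paths from this repo layout; the reading of -w as "hold the tests until the perform_tests RPC arrives" is editorial):
  # outline of the bounds-test flow traced below
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # ...wait for /var/tmp/spdk.sock to accept connections, then:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests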
00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:22.281 04:54:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:22.281 [2024-11-21 04:54:38.854395] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:22.281 [2024-11-21 04:54:38.854529] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73416 ] 00:08:22.281 [2024-11-21 04:54:39.012897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.542 [2024-11-21 04:54:39.039497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.542 [2024-11-21 04:54:39.039766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.542 [2024-11-21 04:54:39.039780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.116 04:54:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.116 04:54:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:23.116 04:54:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:23.116 I/O targets: 00:08:23.116 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:23.116 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:23.116 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:23.116 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:23.116 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:23.116 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:23.116 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:23.116 00:08:23.116 00:08:23.116 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.116 http://cunit.sourceforge.net/ 00:08:23.116 00:08:23.116 00:08:23.116 Suite: bdevio tests on: Nvme3n1 00:08:23.116 Test: blockdev write read block ...passed 00:08:23.116 Test: blockdev write zeroes read block ...passed 00:08:23.116 Test: blockdev write zeroes read no split ...passed 00:08:23.116 Test: blockdev write zeroes read split ...passed 00:08:23.116 Test: blockdev write zeroes read split partial ...passed 00:08:23.116 Test: blockdev reset ...[2024-11-21 04:54:39.795420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:23.116 [2024-11-21 04:54:39.797846] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:08:23.116 passed 00:08:23.116 Test: blockdev write read 8 blocks ...
00:08:23.116 passed 00:08:23.116 Test: blockdev write read size > 128k ...passed 00:08:23.116 Test: blockdev write read invalid size ...passed 00:08:23.116 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.116 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.116 Test: blockdev write read max offset ...passed 00:08:23.116 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.116 Test: blockdev writev readv 8 blocks ...passed 00:08:23.116 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.116 Test: blockdev writev readv block ...passed 00:08:23.116 Test: blockdev writev readv size > 128k ...passed 00:08:23.116 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.116 Test: blockdev comparev and writev ...[2024-11-21 04:54:39.815022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c720e000 len:0x1000 00:08:23.116 [2024-11-21 04:54:39.815071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.116 passed 00:08:23.116 Test: blockdev nvme passthru rw ...passed 00:08:23.116 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.116 Test: blockdev nvme admin passthru ...[2024-11-21 04:54:39.816993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.116 [2024-11-21 04:54:39.817030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.116 passed 00:08:23.116 Test: blockdev copy ...passed 00:08:23.116 Suite: bdevio tests on: Nvme2n3 00:08:23.116 Test: blockdev write read block ...passed 00:08:23.116 Test: blockdev write zeroes read block ...passed 00:08:23.116 Test: blockdev write zeroes read no split ...passed 00:08:23.116 Test: blockdev write zeroes read split ...passed 00:08:23.116 Test: blockdev write zeroes read split partial ...passed 00:08:23.116 Test: blockdev reset ...[2024-11-21 04:54:39.846204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:23.378 [2024-11-21 04:54:39.849678] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:23.378 passed 00:08:23.378 Test: blockdev write read 8 blocks ...passed 00:08:23.378 Test: blockdev write read size > 128k ...passed 00:08:23.378 Test: blockdev write read invalid size ...passed 00:08:23.378 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.378 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.378 Test: blockdev write read max offset ...passed 00:08:23.378 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.378 Test: blockdev writev readv 8 blocks ...passed 00:08:23.378 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.378 Test: blockdev writev readv block ...passed 00:08:23.378 Test: blockdev writev readv size > 128k ...passed 00:08:23.378 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.378 Test: blockdev comparev and writev ...[2024-11-21 04:54:39.858765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c720a000 len:0x1000 00:08:23.378 [2024-11-21 04:54:39.858805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.378 passed 00:08:23.378 Test: blockdev nvme passthru rw ...passed 00:08:23.379 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.379 Test: blockdev nvme admin passthru ...[2024-11-21 04:54:39.860026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.379 [2024-11-21 04:54:39.860053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev copy ...passed 00:08:23.379 Suite: bdevio tests on: Nvme2n2 00:08:23.379 Test: blockdev write read block ...passed 00:08:23.379 Test: blockdev write zeroes read block ...passed 00:08:23.379 Test: blockdev write zeroes read no split ...passed 00:08:23.379 Test: blockdev write zeroes read split ...passed 00:08:23.379 Test: blockdev write zeroes read split partial ...passed 00:08:23.379 Test: blockdev reset ...[2024-11-21 04:54:39.880116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:23.379 passed 00:08:23.379 Test: blockdev write read 8 blocks ...[2024-11-21 04:54:39.883005] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:23.379 passed 00:08:23.379 Test: blockdev write read size > 128k ...passed 00:08:23.379 Test: blockdev write read invalid size ...passed 00:08:23.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.379 Test: blockdev write read max offset ...passed 00:08:23.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.379 Test: blockdev writev readv 8 blocks ...passed 00:08:23.379 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.379 Test: blockdev writev readv block ...passed 00:08:23.379 Test: blockdev writev readv size > 128k ...passed 00:08:23.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.379 Test: blockdev comparev and writev ...[2024-11-21 04:54:39.891220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1005000 len:0x1000 00:08:23.379 [2024-11-21 04:54:39.891258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev nvme passthru rw ...passed 00:08:23.379 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.379 Test: blockdev nvme admin passthru ...[2024-11-21 04:54:39.892264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.379 [2024-11-21 04:54:39.892290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev copy ...passed 00:08:23.379 Suite: bdevio tests on: Nvme2n1 00:08:23.379 Test: blockdev write read block ...passed 00:08:23.379 Test: blockdev write zeroes read block ...passed 00:08:23.379 Test: blockdev write zeroes read no split ...passed 00:08:23.379 Test: blockdev write zeroes read split ...passed 00:08:23.379 Test: blockdev write zeroes read split partial ...passed 00:08:23.379 Test: blockdev reset ...[2024-11-21 04:54:39.917247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:23.379 passed 00:08:23.379 Test: blockdev write read 8 blocks ...[2024-11-21 04:54:39.923150] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:23.379 passed 00:08:23.379 Test: blockdev write read size > 128k ...passed 00:08:23.379 Test: blockdev write read invalid size ...passed 00:08:23.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.379 Test: blockdev write read max offset ...passed 00:08:23.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.379 Test: blockdev writev readv 8 blocks ...passed 00:08:23.379 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.379 Test: blockdev writev readv block ...passed 00:08:23.379 Test: blockdev writev readv size > 128k ...passed 00:08:23.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.379 Test: blockdev comparev and writev ...[2024-11-21 04:54:39.937441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7602000 len:0x1000 00:08:23.379 [2024-11-21 04:54:39.937479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev nvme passthru rw ...passed 00:08:23.379 Test: blockdev nvme passthru vendor specific ...[2024-11-21 04:54:39.939949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:23.379 [2024-11-21 04:54:39.940069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev nvme admin passthru ...passed 00:08:23.379 Test: blockdev copy ...passed 00:08:23.379 Suite: bdevio tests on: Nvme1n1p2 00:08:23.379 Test: blockdev write read block ...passed 00:08:23.379 Test: blockdev write zeroes read block ...passed 00:08:23.379 Test: blockdev write zeroes read no split ...passed 00:08:23.379 Test: blockdev write zeroes read split ...passed 00:08:23.379 Test: blockdev write zeroes read split partial ...passed 00:08:23.379 Test: blockdev reset ...[2024-11-21 04:54:39.967101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:23.379 passed 00:08:23.379 Test: blockdev write read 8 blocks ...[2024-11-21 04:54:39.970418] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:08:23.379 passed 00:08:23.379 Test: blockdev write read size > 128k ...passed 00:08:23.379 Test: blockdev write read invalid size ...passed 00:08:23.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.379 Test: blockdev write read max offset ...passed 00:08:23.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.379 Test: blockdev writev readv 8 blocks ...passed 00:08:23.379 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.379 Test: blockdev writev readv block ...passed 00:08:23.379 Test: blockdev writev readv size > 128k ...passed 00:08:23.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.379 Test: blockdev comparev and writev ...[2024-11-21 04:54:39.984594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e2e3b000 len:0x1000 00:08:23.379 [2024-11-21 04:54:39.984642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev nvme passthru rw ...passed 00:08:23.379 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.379 Test: blockdev nvme admin passthru ...passed 00:08:23.379 Test: blockdev copy ...passed 00:08:23.379 Suite: bdevio tests on: Nvme1n1p1 00:08:23.379 Test: blockdev write read block ...passed 00:08:23.379 Test: blockdev write zeroes read block ...passed 00:08:23.379 Test: blockdev write zeroes read no split ...passed 00:08:23.379 Test: blockdev write zeroes read split ...passed 00:08:23.379 Test: blockdev write zeroes read split partial ...passed 00:08:23.379 Test: blockdev reset ...[2024-11-21 04:54:40.002113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:23.379 [2024-11-21 04:54:40.005309] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:08:23.379 passed 00:08:23.379 Test: blockdev write read 8 blocks ...
00:08:23.379 passed 00:08:23.379 Test: blockdev write read size > 128k ...passed 00:08:23.379 Test: blockdev write read invalid size ...passed 00:08:23.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.379 Test: blockdev write read max offset ...passed 00:08:23.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.379 Test: blockdev writev readv 8 blocks ...passed 00:08:23.379 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.379 Test: blockdev writev readv block ...passed 00:08:23.379 Test: blockdev writev readv size > 128k ...passed 00:08:23.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.379 Test: blockdev comparev and writev ...[2024-11-21 04:54:40.018571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e2e37000 len:0x1000 00:08:23.379 [2024-11-21 04:54:40.018623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:23.379 passed 00:08:23.379 Test: blockdev nvme passthru rw ...passed 00:08:23.379 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.379 Test: blockdev nvme admin passthru ...passed 00:08:23.379 Test: blockdev copy ...passed 00:08:23.379 Suite: bdevio tests on: Nvme0n1 00:08:23.379 Test: blockdev write read block ...passed 00:08:23.379 Test: blockdev write zeroes read block ...passed 00:08:23.379 Test: blockdev write zeroes read no split ...passed 00:08:23.379 Test: blockdev write zeroes read split ...passed 00:08:23.379 Test: blockdev write zeroes read split partial ...passed 00:08:23.379 Test: blockdev reset ...[2024-11-21 04:54:40.036290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:23.379 passed 00:08:23.379 Test: blockdev write read 8 blocks ...[2024-11-21 04:54:40.037976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:23.379 passed 00:08:23.379 Test: blockdev write read size > 128k ...passed 00:08:23.379 Test: blockdev write read invalid size ...passed 00:08:23.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:23.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:23.379 Test: blockdev write read max offset ...passed 00:08:23.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:23.380 Test: blockdev writev readv 8 blocks ...passed 00:08:23.380 Test: blockdev writev readv 30 x 1block ...passed 00:08:23.380 Test: blockdev writev readv block ...passed 00:08:23.380 Test: blockdev writev readv size > 128k ...passed 00:08:23.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:23.380 Test: blockdev comparev and writev ...passed 00:08:23.380 Test: blockdev nvme passthru rw ...[2024-11-21 04:54:40.049507] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:23.380 separate metadata which is not supported yet. 
00:08:23.380 passed 00:08:23.380 Test: blockdev nvme passthru vendor specific ...passed 00:08:23.380 Test: blockdev nvme admin passthru ...[2024-11-21 04:54:40.051135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:23.380 [2024-11-21 04:54:40.051174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:23.380 passed 00:08:23.380 Test: blockdev copy ...passed 00:08:23.380 00:08:23.380 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.380 suites 7 7 n/a 0 0 00:08:23.380 tests 161 161 161 0 0 00:08:23.380 asserts 1025 1025 1025 0 n/a 00:08:23.380 00:08:23.380 Elapsed time = 0.626 seconds 00:08:23.380 0 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73416 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73416 ']' 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73416 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73416 00:08:23.380 killing process with pid 73416 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73416' 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73416 00:08:23.380 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73416 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:23.641 00:08:23.641 real 0m1.481s 00:08:23.641 user 0m3.677s 00:08:23.641 sys 0m0.282s 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.641 ************************************ 00:08:23.641 END TEST bdev_bounds 00:08:23.641 ************************************ 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:23.641 04:54:40 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:23.641 04:54:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:23.641 04:54:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.641 04:54:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.641 ************************************ 00:08:23.641 START TEST bdev_nbd 00:08:23.641 ************************************ 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73465 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73465 /var/tmp/spdk-nbd.sock 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73465 ']' 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.641 04:54:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:23.901 [2024-11-21 04:54:40.401726] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:08:23.901 [2024-11-21 04:54:40.401857] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.901 [2024-11-21 04:54:40.563703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.901 [2024-11-21 04:54:40.588264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.518 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.519 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.780 1+0 records in 00:08:24.780 1+0 records out 00:08:24.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103121 s, 4.0 MB/s 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:24.780 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.041 1+0 records in 00:08:25.041 1+0 records out 00:08:25.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00093514 s, 4.4 MB/s 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.041 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.304 1+0 records in 00:08:25.304 1+0 records out 00:08:25.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864228 s, 4.7 MB/s 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.304 04:54:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.565 1+0 records in 00:08:25.565 1+0 records out 00:08:25.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106031 s, 3.9 MB/s 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:25.565 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.566 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.566 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.830 1+0 records in 00:08:25.830 1+0 records out 00:08:25.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108996 s, 3.8 MB/s 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:25.830 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.090 1+0 records in 00:08:26.090 1+0 records out 00:08:26.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151409 s, 2.7 MB/s 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.090 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:26.351 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.352 1+0 records in 00:08:26.352 1+0 records out 00:08:26.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864986 s, 4.7 MB/s 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:26.352 04:54:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd0", 00:08:26.613 "bdev_name": "Nvme0n1" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd1", 00:08:26.613 "bdev_name": "Nvme1n1p1" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd2", 00:08:26.613 "bdev_name": "Nvme1n1p2" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd3", 00:08:26.613 "bdev_name": "Nvme2n1" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd4", 00:08:26.613 "bdev_name": "Nvme2n2" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd5", 00:08:26.613 "bdev_name": "Nvme2n3" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd6", 00:08:26.613 "bdev_name": "Nvme3n1" 00:08:26.613 } 00:08:26.613 ]' 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd0", 00:08:26.613 "bdev_name": "Nvme0n1" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd1", 00:08:26.613 "bdev_name": "Nvme1n1p1" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd2", 00:08:26.613 "bdev_name": "Nvme1n1p2" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd3", 00:08:26.613 "bdev_name": "Nvme2n1" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd4", 00:08:26.613 "bdev_name": "Nvme2n2" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd5", 00:08:26.613 "bdev_name": "Nvme2n3" 00:08:26.613 }, 00:08:26.613 { 00:08:26.613 "nbd_device": "/dev/nbd6", 00:08:26.613 "bdev_name": "Nvme3n1" 00:08:26.613 } 00:08:26.613 ]' 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.613 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.873 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.134 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.393 04:54:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.653 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:27.913 04:54:44 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.913 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.174 04:54:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:28.434 /dev/nbd0 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.434 1+0 records in 00:08:28.434 1+0 records out 00:08:28.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120082 s, 3.4 MB/s 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.434 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:28.694 /dev/nbd1 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.694 1+0 records in 00:08:28.694 1+0 records out 00:08:28.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106385 s, 3.9 MB/s 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.694 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:28.953 /dev/nbd10 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.953 1+0 records in 00:08:28.953 1+0 records out 00:08:28.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944459 s, 4.3 MB/s 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:28.953 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:29.213 /dev/nbd11 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.213 1+0 records in 00:08:29.213 1+0 records out 00:08:29.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00093501 s, 4.4 MB/s 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.213 04:54:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:29.475 /dev/nbd12 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:29.475 04:54:46 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.475 1+0 records in 00:08:29.475 1+0 records out 00:08:29.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012391 s, 3.3 MB/s 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.475 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:29.736 /dev/nbd13 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.736 1+0 records in 00:08:29.736 1+0 records out 00:08:29.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953219 s, 4.3 MB/s 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.736 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:29.998 /dev/nbd14 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.998 1+0 records in 00:08:29.998 1+0 records out 00:08:29.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903643 s, 4.5 MB/s 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.998 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd0", 00:08:30.260 "bdev_name": "Nvme0n1" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd1", 00:08:30.260 "bdev_name": "Nvme1n1p1" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd10", 00:08:30.260 "bdev_name": "Nvme1n1p2" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd11", 00:08:30.260 "bdev_name": "Nvme2n1" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd12", 00:08:30.260 "bdev_name": "Nvme2n2" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd13", 00:08:30.260 "bdev_name": "Nvme2n3" 00:08:30.260 }, 00:08:30.260 { 
00:08:30.260 "nbd_device": "/dev/nbd14", 00:08:30.260 "bdev_name": "Nvme3n1" 00:08:30.260 } 00:08:30.260 ]' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd0", 00:08:30.260 "bdev_name": "Nvme0n1" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd1", 00:08:30.260 "bdev_name": "Nvme1n1p1" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd10", 00:08:30.260 "bdev_name": "Nvme1n1p2" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd11", 00:08:30.260 "bdev_name": "Nvme2n1" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd12", 00:08:30.260 "bdev_name": "Nvme2n2" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd13", 00:08:30.260 "bdev_name": "Nvme2n3" 00:08:30.260 }, 00:08:30.260 { 00:08:30.260 "nbd_device": "/dev/nbd14", 00:08:30.260 "bdev_name": "Nvme3n1" 00:08:30.260 } 00:08:30.260 ]' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:30.260 /dev/nbd1 00:08:30.260 /dev/nbd10 00:08:30.260 /dev/nbd11 00:08:30.260 /dev/nbd12 00:08:30.260 /dev/nbd13 00:08:30.260 /dev/nbd14' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:30.260 /dev/nbd1 00:08:30.260 /dev/nbd10 00:08:30.260 /dev/nbd11 00:08:30.260 /dev/nbd12 00:08:30.260 /dev/nbd13 00:08:30.260 /dev/nbd14' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:30.260 256+0 records in 00:08:30.260 256+0 records out 00:08:30.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630393 s, 166 MB/s 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.260 04:54:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:30.521 256+0 records in 00:08:30.521 256+0 records out 00:08:30.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22051 s, 4.8 MB/s 00:08:30.521 
04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.521 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:30.783 256+0 records in 00:08:30.783 256+0 records out 00:08:30.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.253439 s, 4.1 MB/s 00:08:30.783 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.783 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:31.046 256+0 records in 00:08:31.046 256+0 records out 00:08:31.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2363 s, 4.4 MB/s 00:08:31.046 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.046 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:31.046 256+0 records in 00:08:31.046 256+0 records out 00:08:31.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200685 s, 5.2 MB/s 00:08:31.046 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.046 04:54:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:31.618 256+0 records in 00:08:31.618 256+0 records out 00:08:31.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.355751 s, 2.9 MB/s 00:08:31.618 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.618 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:31.879 256+0 records in 00:08:31.879 256+0 records out 00:08:31.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.2582 s, 4.1 MB/s 00:08:31.879 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.879 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:32.141 256+0 records in 00:08:32.141 256+0 records out 00:08:32.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.262587 s, 4.0 MB/s 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.141 04:54:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.141 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.142 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.404 
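Stripped of the xtrace noise, the write/verify pass that just completed is a small loop: stage 1 MiB of random data, push it through every NBD device with direct I/O, then compare it back byte for byte. A condensed sketch using the same commands and flags as the trace:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"   # any byte mismatch fails the test
    done
    rm "$tmp_file"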
04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.404 04:54:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.665 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.666 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.927 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:33.188 04:54:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.188 04:54:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:33.449 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.711 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:33.973 
04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:33.973 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:34.235 malloc_lvol_verify 00:08:34.235 04:54:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:34.496 c37d1d33-ccab-493e-a1f3-82f0418df03f 00:08:34.496 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:34.755 2e354b46-c42d-42ee-aac9-74a42bf2010a 00:08:34.755 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:35.058 /dev/nbd0 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:35.058 mke2fs 1.47.0 (5-Feb-2023) 00:08:35.058 Discarding device blocks: 0/4096 done 00:08:35.058 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:35.058 00:08:35.058 Allocating group tables: 0/1 done 00:08:35.058 Writing inode tables: 0/1 done 00:08:35.058 Creating journal (1024 blocks): done 00:08:35.058 Writing superblocks and filesystem accounting information: 0/1 done 00:08:35.058 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.058 04:54:51 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:35.058 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73465 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73465 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73465 00:08:35.339 killing process with pid 73465 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73465' 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73465 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73465 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:35.339 00:08:35.339 real 0m11.647s 00:08:35.339 user 0m15.835s 00:08:35.339 sys 0m4.208s 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.339 ************************************ 00:08:35.339 END TEST bdev_nbd 00:08:35.339 ************************************ 00:08:35.339 04:54:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:35.339 04:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:35.339 04:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:35.339 04:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:35.339 skipping fio tests on NVMe due to multi-ns failures. 00:08:35.339 04:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
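The lvol sanity check that closed out bdev_nbd above fits in a handful of RPC calls: build a malloc bdev, put a logical volume store on it, carve out a volume, expose it over NBD, and prove the block path works by formatting it. Condensed from the trace; reading the size arguments as 16 MiB for the malloc bdev with 512-byte blocks and 4 MiB for the lvol is an interpretation of the RPC, not something the log states:

    rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"
    }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc bdev_lvol_create lvol 4 -l lvs
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0          # only succeeds if reads and writes both work
    rpc nbd_stop_disk /dev/nbd0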
00:08:35.339 04:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:35.339 04:54:52 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:35.339 04:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:35.339 04:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.339 04:54:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.339 ************************************ 00:08:35.339 START TEST bdev_verify 00:08:35.339 ************************************ 00:08:35.339 04:54:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:35.598 [2024-11-21 04:54:52.110870] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:35.598 [2024-11-21 04:54:52.111011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73885 ] 00:08:35.598 [2024-11-21 04:54:52.267940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:35.598 [2024-11-21 04:54:52.305214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.598 [2024-11-21 04:54:52.305298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.165 Running I/O for 5 seconds... 
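While the verify pass runs, the bdevperf invocation above is worth annotating. The glosses below are annotations on the traced command, not log content; -C is passed through as traced and its meaning is not shown in this log, so it is left unglossed:

    #   -q 128      128 I/Os in flight per job
    #   -o 4096     4 KiB per I/O
    #   -w verify   write a pattern, read it back, compare
    #   -t 5        run for 5 seconds
    #   -m 0x3      reactors on cores 0 and 1 (hence two jobs per bdev below)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3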
00:08:38.479 24576.00 IOPS, 96.00 MiB/s [2024-11-21T04:54:56.155Z] 23936.00 IOPS, 93.50 MiB/s [2024-11-21T04:54:57.096Z] 22954.67 IOPS, 89.67 MiB/s [2024-11-21T04:54:58.038Z] 22368.00 IOPS, 87.38 MiB/s [2024-11-21T04:54:58.038Z] 21772.80 IOPS, 85.05 MiB/s
00:08:41.304 Latency(us)
00:08:41.304 [2024-11-21T04:54:58.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.304 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0xbd0bd
00:08:41.304 Nvme0n1 : 5.07 1591.59 6.22 0.00 0.00 80223.51 14922.04 72593.72
00:08:41.304 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:41.304 Nvme0n1 : 5.08 1485.38 5.80 0.00 0.00 85972.10 19660.80 75416.81
00:08:41.304 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0x4ff80
00:08:41.304 Nvme1n1p1 : 5.07 1590.55 6.21 0.00 0.00 80149.38 16434.41 68157.44
00:08:41.304 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x4ff80 length 0x4ff80
00:08:41.304 Nvme1n1p1 : 5.09 1484.88 5.80 0.00 0.00 85852.01 21072.34 68560.74
00:08:41.304 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0x4ff7f
00:08:41.304 Nvme1n1p2 : 5.07 1590.03 6.21 0.00 0.00 80030.52 18450.90 70577.23
00:08:41.304 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:08:41.304 Nvme1n1p2 : 5.09 1483.97 5.80 0.00 0.00 85728.40 21878.94 67350.84
00:08:41.304 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0x80000
00:08:41.304 Nvme2n1 : 5.07 1589.55 6.21 0.00 0.00 79900.03 19358.33 70577.23
00:08:41.304 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x80000 length 0x80000
00:08:41.304 Nvme2n1 : 5.09 1483.55 5.80 0.00 0.00 85567.18 21979.77 66140.95
00:08:41.304 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0x80000
00:08:41.304 Nvme2n2 : 5.07 1589.08 6.21 0.00 0.00 79797.14 18047.61 72190.42
00:08:41.304 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x80000 length 0x80000
00:08:41.304 Nvme2n2 : 5.09 1483.13 5.79 0.00 0.00 85412.87 20366.57 66947.54
00:08:41.304 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0x80000
00:08:41.304 Nvme2n3 : 5.08 1588.58 6.21 0.00 0.00 79690.66 12199.78 72593.72
00:08:41.304 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x80000 length 0x80000
00:08:41.304 Nvme2n3 : 5.09 1482.73 5.79 0.00 0.00 85257.15 14115.45 69770.63
00:08:41.304 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x0 length 0x20000
00:08:41.304 Nvme3n1 : 5.08 1598.92 6.25 0.00 0.00 79147.37 3856.54 75013.51
00:08:41.304 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:41.304 Verification LBA range: start 0x20000 length 0x20000
00:08:41.304 Nvme3n1 : 5.09 1482.32 5.79 0.00 0.00 85178.37 9981.64 71383.83
00:08:41.304 [2024-11-21T04:54:58.038Z] ===================================================================================================================
00:08:41.304 [2024-11-21T04:54:58.038Z] Total : 21524.26 84.08 0.00 0.00 82611.85 3856.54 75416.81
00:08:42.691
00:08:42.691 real 0m7.110s
00:08:42.691 user 0m13.382s
00:08:42.691 sys 0m0.264s
00:08:42.691 04:54:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.691 ************************************
00:08:42.691 END TEST bdev_verify
00:08:42.691 ************************************
00:08:42.691 04:54:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:42.691 04:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:42.691 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:42.691 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.691 04:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:42.691 ************************************
00:08:42.691 START TEST bdev_verify_big_io
00:08:42.691 ************************************
00:08:42.691 04:54:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:42.953 [2024-11-21 04:54:59.300266] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:08:42.953 [2024-11-21 04:54:59.300429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73978 ]
00:08:42.953 [2024-11-21 04:54:59.464865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:42.953 [2024-11-21 04:54:59.507293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:42.953 [2024-11-21 04:54:59.507340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:43.526 Running I/O for 5 seconds...
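While the large-I/O pass runs, a quick arithmetic cross-check on the 4 KiB verify summary above: aggregate IOPS times I/O size should reproduce the throughput column, and it does.

    # 21524.26 IOPS x 4096 bytes per I/O, expressed in MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 21524.26 * 4096 / (1024 * 1024) }'
    # prints 84.08 MiB/s, matching the Total row of the verify table.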
00:08:47.380 1221.00 IOPS, 76.31 MiB/s [2024-11-21T04:55:06.046Z] 2328.00 IOPS, 145.50 MiB/s [2024-11-21T04:55:06.308Z] 2481.00 IOPS, 155.06 MiB/s
00:08:49.574 Latency(us)
00:08:49.574 [2024-11-21T04:55:06.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:49.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0xbd0b
00:08:49.574 Nvme0n1 : 5.77 116.12 7.26 0.00 0.00 1055125.39 24097.08 1135688.47
00:08:49.574 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:49.574 Nvme0n1 : 5.92 124.60 7.79 0.00 0.00 965134.53 23189.66 1142141.24
00:08:49.574 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0x4ff8
00:08:49.574 Nvme1n1p1 : 5.77 110.86 6.93 0.00 0.00 1068899.41 66947.54 1109877.37
00:08:49.574 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x4ff8 length 0x4ff8
00:08:49.574 Nvme1n1p1 : 5.77 123.06 7.69 0.00 0.00 964312.18 79853.10 974369.08
00:08:49.574 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0x4ff7
00:08:49.574 Nvme1n1p2 : 5.84 117.37 7.34 0.00 0.00 991558.43 108890.58 1122782.92
00:08:49.574 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x4ff7 length 0x4ff7
00:08:49.574 Nvme1n1p2 : 5.92 126.91 7.93 0.00 0.00 913399.06 106470.79 980821.86
00:08:49.574 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0x8000
00:08:49.574 Nvme2n1 : 5.91 126.71 7.92 0.00 0.00 898306.40 61301.37 1084066.26
00:08:49.574 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x8000 length 0x8000
00:08:49.574 Nvme2n1 : 5.98 124.83 7.80 0.00 0.00 908908.40 63721.16 1490591.11
00:08:49.574 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0x8000
00:08:49.574 Nvme2n2 : 5.91 129.88 8.12 0.00 0.00 856240.44 73400.32 1109877.37
00:08:49.574 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x8000 length 0x8000
00:08:49.574 Nvme2n2 : 6.02 131.48 8.22 0.00 0.00 843954.10 13409.67 1503496.66
00:08:49.574 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0x8000
00:08:49.574 Nvme2n3 : 5.97 139.26 8.70 0.00 0.00 778477.64 33675.42 1129235.69
00:08:49.574 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x8000 length 0x8000
00:08:49.574 Nvme2n3 : 6.03 135.88 8.49 0.00 0.00 792896.70 16232.76 1522854.99
00:08:49.574 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x0 length 0x2000
00:08:49.574 Nvme3n1 : 6.00 154.29 9.64 0.00 0.00 683788.83 3806.13 1148594.02
00:08:49.574 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:49.574 Verification LBA range: start 0x2000 length 0x2000
00:08:49.574 Nvme3n1 : 6.04 151.88 9.49 0.00 0.00 690136.06 3112.96 1535760.54
00:08:49.574 [2024-11-21T04:55:06.308Z] ===================================================================================================================
00:08:49.574 [2024-11-21T04:55:06.308Z] Total : 1813.14 113.32 0.00 0.00 874322.80 3112.96 1535760.54
00:08:50.510
00:08:50.510 real 0m7.833s
00:08:50.510 user 0m14.747s
00:08:50.510 sys 0m0.350s
00:08:50.510 04:55:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:50.510 04:55:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:50.510 ************************************
00:08:50.510 END TEST bdev_verify_big_io
00:08:50.510 ************************************
00:08:50.510 04:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:50.510 04:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:50.510 04:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:50.510 04:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:50.510 ************************************
00:08:50.510 START TEST bdev_write_zeroes
00:08:50.510 ************************************
00:08:50.510 04:55:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:50.769 [2024-11-21 04:55:07.177727] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:08:50.769 [2024-11-21 04:55:07.177856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74082 ]
00:08:50.769 [2024-11-21 04:55:07.339059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.769 [2024-11-21 04:55:07.364353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.340 Running I/O for 1 seconds...
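The run starting above reuses the same harness with two changes: -w write_zeroes issues zero-fill commands instead of data writes, and -t 1 keeps the pass short since nothing is verified afterwards. The same IOPS-to-throughput identity also holds for the big-I/O table above: 1813.14 x 65536 / 1048576 = 113.32 MiB/s.

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1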
00:08:52.278 47025.00 IOPS, 183.69 MiB/s
00:08:52.278 Latency(us)
00:08:52.278 [2024-11-21T04:55:09.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:52.278 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme0n1 : 1.09 6176.27 24.13 0.00 0.00 20296.34 6553.60 146800.64
00:08:52.278 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme1n1p1 : 1.08 6439.62 25.15 0.00 0.00 19758.08 11342.77 128248.91
00:08:52.278 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme1n1p2 : 1.08 6374.60 24.90 0.00 0.00 19372.43 11594.83 128248.91
00:08:52.278 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme2n1 : 1.08 6351.18 24.81 0.00 0.00 19437.08 11393.18 141961.06
00:08:52.278 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme2n2 : 1.08 6343.88 24.78 0.00 0.00 19408.87 10233.70 141154.46
00:08:52.278 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme2n3 : 1.08 6336.97 24.75 0.00 0.00 19362.68 8721.33 136314.88
00:08:52.278 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.278 Nvme3n1 : 1.08 6270.73 24.50 0.00 0.00 19536.24 11746.07 136314.88
00:08:52.278 [2024-11-21T04:55:09.012Z] ===================================================================================================================
00:08:52.278 [2024-11-21T04:55:09.012Z] Total : 44293.26 173.02 0.00 0.00 19594.09 6553.60 146800.64
00:08:52.545
00:08:52.545 real 0m2.011s
00:08:52.545 user 0m1.689s
00:08:52.545 sys 0m0.208s
00:08:52.545 04:55:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:52.545 ************************************
00:08:52.545 END TEST bdev_write_zeroes
00:08:52.545 ************************************
00:08:52.545 04:55:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:52.545 04:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:52.545 04:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:52.545 04:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:52.545 04:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:52.545 ************************************
00:08:52.545 START TEST bdev_json_nonenclosed
00:08:52.545 ************************************
00:08:52.545 04:55:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:52.545 [2024-11-21 04:55:09.260514] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:08:52.545 [2024-11-21 04:55:09.260705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74124 ] 00:08:52.806 [2024-11-21 04:55:09.427075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.806 [2024-11-21 04:55:09.468473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.806 [2024-11-21 04:55:09.468595] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:52.806 [2024-11-21 04:55:09.468636] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:52.806 [2024-11-21 04:55:09.468652] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.068 00:08:53.068 real 0m0.377s 00:08:53.068 user 0m0.148s 00:08:53.068 sys 0m0.125s 00:08:53.068 04:55:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.068 ************************************ 00:08:53.068 END TEST bdev_json_nonenclosed 00:08:53.068 ************************************ 00:08:53.068 04:55:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:53.068 04:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.068 04:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:53.068 04:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.068 04:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.068 ************************************ 00:08:53.068 START TEST bdev_json_nonarray 00:08:53.068 ************************************ 00:08:53.068 04:55:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.068 [2024-11-21 04:55:09.707539] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:08:53.068 [2024-11-21 04:55:09.707712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74144 ] 00:08:53.331 [2024-11-21 04:55:09.872540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.331 [2024-11-21 04:55:09.913997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.331 [2024-11-21 04:55:09.914130] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:53.331 [2024-11-21 04:55:09.914151] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:53.331 [2024-11-21 04:55:09.914166] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.331 00:08:53.331 real 0m0.379s 00:08:53.331 user 0m0.154s 00:08:53.331 sys 0m0.121s 00:08:53.331 04:55:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.331 04:55:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:53.331 ************************************ 00:08:53.331 END TEST bdev_json_nonarray 00:08:53.331 ************************************ 00:08:53.331 04:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:53.331 04:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:53.331 04:55:10 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:53.331 04:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.331 04:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.331 04:55:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.592 ************************************ 00:08:53.592 START TEST bdev_gpt_uuid 00:08:53.592 ************************************ 00:08:53.592 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:53.592 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74169 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74169 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74169 ']' 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.593 04:55:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:53.593 [2024-11-21 04:55:10.164887] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
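The two JSON tests above are negative tests: each feeds bdevperf a deliberately malformed config and passes only when the app reports a clean error and stops, as the rpc.c and app.c lines confirm. The actual file contents are not in this log; the shapes below are assumptions reconstructed from the two error messages ("not enclosed in {}" and "'subsystems' should be an array"):

    # nonenclosed.json (assumed shape): top-level content without enclosing {}
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # nonarray.json (assumed shape): "subsystems" present but not an array
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF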
00:08:53.593 [2024-11-21 04:55:10.165051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74169 ] 00:08:53.855 [2024-11-21 04:55:10.327200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.855 [2024-11-21 04:55:10.368380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.428 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.428 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:54.428 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:54.428 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.428 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:54.689 Some configs were skipped because the RPC state that can call them passed over. 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.689 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:54.689 { 00:08:54.689 "name": "Nvme1n1p1", 00:08:54.689 "aliases": [ 00:08:54.689 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:54.689 ], 00:08:54.689 "product_name": "GPT Disk", 00:08:54.689 "block_size": 4096, 00:08:54.689 "num_blocks": 655104, 00:08:54.689 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:54.689 "assigned_rate_limits": { 00:08:54.689 "rw_ios_per_sec": 0, 00:08:54.689 "rw_mbytes_per_sec": 0, 00:08:54.689 "r_mbytes_per_sec": 0, 00:08:54.689 "w_mbytes_per_sec": 0 00:08:54.689 }, 00:08:54.690 "claimed": false, 00:08:54.690 "zoned": false, 00:08:54.690 "supported_io_types": { 00:08:54.690 "read": true, 00:08:54.690 "write": true, 00:08:54.690 "unmap": true, 00:08:54.690 "flush": true, 00:08:54.690 "reset": true, 00:08:54.690 "nvme_admin": false, 00:08:54.690 "nvme_io": false, 00:08:54.690 "nvme_io_md": false, 00:08:54.690 "write_zeroes": true, 00:08:54.690 "zcopy": false, 00:08:54.690 "get_zone_info": false, 00:08:54.690 "zone_management": false, 00:08:54.690 "zone_append": false, 00:08:54.690 "compare": true, 00:08:54.690 "compare_and_write": false, 00:08:54.690 "abort": true, 00:08:54.690 "seek_hole": false, 00:08:54.690 "seek_data": false, 00:08:54.690 "copy": true, 00:08:54.690 "nvme_iov_md": false 00:08:54.690 }, 00:08:54.690 "driver_specific": { 
00:08:54.690 "gpt": { 00:08:54.690 "base_bdev": "Nvme1n1", 00:08:54.690 "offset_blocks": 256, 00:08:54.690 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:54.690 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:54.690 "partition_name": "SPDK_TEST_first" 00:08:54.690 } 00:08:54.690 } 00:08:54.690 } 00:08:54.690 ]' 00:08:54.690 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:54.690 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:54.690 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.951 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:54.951 { 00:08:54.951 "name": "Nvme1n1p2", 00:08:54.951 "aliases": [ 00:08:54.951 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:54.951 ], 00:08:54.951 "product_name": "GPT Disk", 00:08:54.951 "block_size": 4096, 00:08:54.951 "num_blocks": 655103, 00:08:54.951 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:54.951 "assigned_rate_limits": { 00:08:54.951 "rw_ios_per_sec": 0, 00:08:54.951 "rw_mbytes_per_sec": 0, 00:08:54.951 "r_mbytes_per_sec": 0, 00:08:54.951 "w_mbytes_per_sec": 0 00:08:54.951 }, 00:08:54.951 "claimed": false, 00:08:54.951 "zoned": false, 00:08:54.951 "supported_io_types": { 00:08:54.951 "read": true, 00:08:54.951 "write": true, 00:08:54.951 "unmap": true, 00:08:54.951 "flush": true, 00:08:54.951 "reset": true, 00:08:54.951 "nvme_admin": false, 00:08:54.951 "nvme_io": false, 00:08:54.951 "nvme_io_md": false, 00:08:54.951 "write_zeroes": true, 00:08:54.951 "zcopy": false, 00:08:54.951 "get_zone_info": false, 00:08:54.951 "zone_management": false, 00:08:54.951 "zone_append": false, 00:08:54.951 "compare": true, 00:08:54.951 "compare_and_write": false, 00:08:54.951 "abort": true, 00:08:54.952 "seek_hole": false, 00:08:54.952 "seek_data": false, 00:08:54.952 "copy": true, 00:08:54.952 "nvme_iov_md": false 00:08:54.952 }, 00:08:54.952 "driver_specific": { 00:08:54.952 "gpt": { 00:08:54.952 "base_bdev": "Nvme1n1", 00:08:54.952 "offset_blocks": 655360, 00:08:54.952 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:54.952 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:54.952 "partition_name": "SPDK_TEST_second" 00:08:54.952 } 00:08:54.952 } 00:08:54.952 } 00:08:54.952 ]' 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74169 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74169 ']' 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74169 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74169 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74169' 00:08:54.952 killing process with pid 74169 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74169 00:08:54.952 04:55:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74169 00:08:55.525 00:08:55.525 real 0m2.037s 00:08:55.525 user 0m2.024s 00:08:55.525 sys 0m0.557s 00:08:55.525 04:55:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.525 ************************************ 00:08:55.525 END TEST bdev_gpt_uuid 00:08:55.525 ************************************ 00:08:55.525 04:55:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:55.525 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:55.525 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:55.525 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:55.525 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:55.526 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.526 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:55.526 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:55.526 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:55.526 04:55:12 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:55.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:56.046 Waiting for block devices as requested 00:08:56.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.046 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:56.307 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.307 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.597 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:01.597 04:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:01.597 04:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:01.597 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:01.597 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:01.597 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:01.597 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:01.597 04:55:18 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:01.597 00:09:01.597 real 0m51.309s 00:09:01.597 user 1m4.046s 00:09:01.597 sys 0m9.002s 00:09:01.597 04:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.597 04:55:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:01.597 ************************************ 00:09:01.597 END TEST blockdev_nvme_gpt 00:09:01.597 ************************************ 00:09:01.859 04:55:18 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:01.859 04:55:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.859 04:55:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.859 04:55:18 -- common/autotest_common.sh@10 -- # set +x 00:09:01.859 ************************************ 00:09:01.859 START TEST nvme 00:09:01.859 ************************************ 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:01.859 * Looking for test storage... 00:09:01.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.859 04:55:18 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.859 04:55:18 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.859 04:55:18 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.859 04:55:18 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.859 04:55:18 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.859 04:55:18 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:01.859 04:55:18 nvme -- scripts/common.sh@345 -- # : 1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.859 04:55:18 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.859 04:55:18 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@353 -- # local d=1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.859 04:55:18 nvme -- scripts/common.sh@355 -- # echo 1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.859 04:55:18 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@353 -- # local d=2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.859 04:55:18 nvme -- scripts/common.sh@355 -- # echo 2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.859 04:55:18 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.859 04:55:18 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.859 04:55:18 nvme -- scripts/common.sh@368 -- # return 0 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.859 --rc genhtml_branch_coverage=1 00:09:01.859 --rc genhtml_function_coverage=1 00:09:01.859 --rc genhtml_legend=1 00:09:01.859 --rc geninfo_all_blocks=1 00:09:01.859 --rc geninfo_unexecuted_blocks=1 00:09:01.859 00:09:01.859 ' 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.859 --rc genhtml_branch_coverage=1 00:09:01.859 --rc genhtml_function_coverage=1 00:09:01.859 --rc genhtml_legend=1 00:09:01.859 --rc geninfo_all_blocks=1 00:09:01.859 --rc geninfo_unexecuted_blocks=1 00:09:01.859 00:09:01.859 ' 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.859 --rc genhtml_branch_coverage=1 00:09:01.859 --rc genhtml_function_coverage=1 00:09:01.859 --rc genhtml_legend=1 00:09:01.859 --rc geninfo_all_blocks=1 00:09:01.859 --rc geninfo_unexecuted_blocks=1 00:09:01.859 00:09:01.859 ' 00:09:01.859 04:55:18 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.859 --rc genhtml_branch_coverage=1 00:09:01.859 --rc genhtml_function_coverage=1 00:09:01.859 --rc genhtml_legend=1 00:09:01.859 --rc geninfo_all_blocks=1 00:09:01.859 --rc geninfo_unexecuted_blocks=1 00:09:01.859 00:09:01.859 ' 00:09:01.859 04:55:18 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:02.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:03.003 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.003 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.003 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.003 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:03.003 04:55:19 nvme -- nvme/nvme.sh@79 -- # uname 00:09:03.003 04:55:19 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:03.003 04:55:19 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:03.003 04:55:19 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:03.003 04:55:19 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:03.003 Waiting for stub to ready for secondary processes... 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1075 -- # stubpid=74798 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74798 ]] 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:03.003 04:55:19 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:03.264 [2024-11-21 04:55:19.745054] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:09:03.264 [2024-11-21 04:55:19.745364] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:04.206 04:55:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:04.207 04:55:20 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74798 ]] 00:09:04.207 04:55:20 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:04.207 [2024-11-21 04:55:20.756806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.207 [2024-11-21 04:55:20.771997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.207 [2024-11-21 04:55:20.772319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.207 [2024-11-21 04:55:20.772362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.207 [2024-11-21 04:55:20.782778] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:04.207 [2024-11-21 04:55:20.782825] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:04.207 [2024-11-21 04:55:20.795332] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:04.207 [2024-11-21 04:55:20.795605] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:04.207 [2024-11-21 04:55:20.796756] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:04.207 [2024-11-21 04:55:20.797232] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:04.207 [2024-11-21 04:55:20.797363] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:04.207 [2024-11-21 04:55:20.798389] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:04.207 [2024-11-21 04:55:20.798882] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:04.207 [2024-11-21 04:55:20.799012] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:04.207 [2024-11-21 04:55:20.800802] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:04.207 [2024-11-21 04:55:20.801152] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:04.207 [2024-11-21 04:55:20.801285] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:04.207 [2024-11-21 04:55:20.801515] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:04.207 [2024-11-21 04:55:20.801672] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:05.149 04:55:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:05.149 04:55:21 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:05.149 done. 00:09:05.149 04:55:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:05.149 04:55:21 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:05.149 04:55:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.149 04:55:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.149 ************************************ 00:09:05.149 START TEST nvme_reset 00:09:05.149 ************************************ 00:09:05.149 04:55:21 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:05.410 Initializing NVMe Controllers 00:09:05.410 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:05.410 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:05.410 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:05.410 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:05.410 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:05.410 00:09:05.410 real 0m0.205s 00:09:05.410 user 0m0.070s 00:09:05.410 sys 0m0.090s 00:09:05.410 04:55:21 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.410 04:55:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:05.410 ************************************ 00:09:05.410 END TEST nvme_reset 00:09:05.410 ************************************ 00:09:05.410 04:55:21 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:05.410 04:55:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.410 04:55:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.410 04:55:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.410 ************************************ 00:09:05.410 START TEST nvme_identify 00:09:05.410 ************************************ 00:09:05.410 04:55:21 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:05.410 04:55:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:05.410 04:55:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:05.410 04:55:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:05.410 04:55:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:05.410 04:55:21 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:05.410 04:55:21 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:05.410 04:55:21 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.410 04:55:21 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:05.410 04:55:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:05.410 04:55:22 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:05.410 04:55:22 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.410 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:05.674 ===================================================== 00:09:05.674 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:05.674 ===================================================== 00:09:05.674 Controller Capabilities/Features 00:09:05.674 ================================ 00:09:05.674 Vendor ID: 1b36 00:09:05.674 Subsystem Vendor ID: 1af4 00:09:05.674 Serial Number: 12340 00:09:05.674 Model Number: QEMU NVMe Ctrl 00:09:05.674 Firmware Version: 8.0.0 00:09:05.674 Recommended Arb Burst: 6 00:09:05.674 IEEE OUI Identifier: 00 54 52 00:09:05.674 Multi-path I/O 00:09:05.674 May have multiple subsystem ports: No 00:09:05.674 May have multiple controllers: No 00:09:05.674 Associated with SR-IOV VF: No 00:09:05.674 Max Data Transfer Size: 524288 00:09:05.674 Max Number of Namespaces: 256 00:09:05.674 Max Number of I/O Queues: 64 00:09:05.674 NVMe Specification Version (VS): 1.4 00:09:05.674 NVMe Specification Version (Identify): 1.4 00:09:05.674 Maximum Queue Entries: 2048 00:09:05.674 Contiguous Queues Required: Yes 00:09:05.674 Arbitration Mechanisms Supported 00:09:05.674 Weighted Round Robin: Not Supported 00:09:05.674 Vendor Specific: Not Supported 00:09:05.674 Reset Timeout: 7500 ms 00:09:05.674 Doorbell Stride: 4 bytes 00:09:05.674 NVM Subsystem Reset: Not Supported 00:09:05.674 Command Sets Supported 00:09:05.674 NVM Command Set: Supported 00:09:05.674 Boot Partition: Not Supported 00:09:05.674 Memory Page Size Minimum: 4096 bytes 00:09:05.674 Memory Page Size Maximum: 65536 bytes 00:09:05.674 Persistent Memory Region: Not Supported 00:09:05.674 Optional Asynchronous Events Supported 00:09:05.674 Namespace Attribute Notices: Supported 00:09:05.674 Firmware Activation Notices: Not Supported 00:09:05.674 ANA Change Notices: Not Supported 00:09:05.674 PLE Aggregate Log Change Notices: Not Supported 00:09:05.674 LBA Status Info Alert Notices: Not Supported 00:09:05.674 EGE Aggregate Log Change Notices: Not Supported 00:09:05.674 Normal NVM Subsystem Shutdown event: Not Supported 00:09:05.674 Zone Descriptor Change Notices: Not Supported 00:09:05.674 Discovery Log Change Notices: Not Supported 00:09:05.674 Controller Attributes 00:09:05.674 128-bit Host Identifier: Not Supported 00:09:05.674 Non-Operational Permissive Mode: Not Supported 00:09:05.674 NVM Sets: Not Supported 00:09:05.674 Read Recovery Levels: Not Supported 00:09:05.674 Endurance Groups: Not Supported 00:09:05.674 Predictable Latency Mode: Not Supported 00:09:05.674 Traffic Based Keep ALive: Not Supported 00:09:05.674 Namespace Granularity: Not Supported 00:09:05.674 SQ Associations: Not Supported 00:09:05.674 UUID List: Not Supported 00:09:05.674 Multi-Domain Subsystem: Not Supported 00:09:05.674 Fixed Capacity Management: Not Supported 00:09:05.674 Variable Capacity Management: Not Supported 00:09:05.674 Delete Endurance Group: Not Supported 00:09:05.674 Delete NVM Set: Not Supported 00:09:05.674 Extended LBA Formats Supported: Supported 00:09:05.674 Flexible Data Placement Supported: Not Supported 00:09:05.674 00:09:05.674 Controller Memory Buffer Support 00:09:05.674 ================================ 00:09:05.674 Supported: No 00:09:05.674 00:09:05.674 Persistent Memory Region Support 00:09:05.674 ================================ 00:09:05.674 Supported: No 00:09:05.674 00:09:05.674 Admin 
Command Set Attributes 00:09:05.674 ============================ 00:09:05.674 Security Send/Receive: Not Supported 00:09:05.674 Format NVM: Supported 00:09:05.674 Firmware Activate/Download: Not Supported 00:09:05.674 Namespace Management: Supported 00:09:05.674 Device Self-Test: Not Supported 00:09:05.674 Directives: Supported 00:09:05.674 NVMe-MI: Not Supported 00:09:05.674 Virtualization Management: Not Supported 00:09:05.674 Doorbell Buffer Config: Supported 00:09:05.674 Get LBA Status Capability: Not Supported 00:09:05.674 Command & Feature Lockdown Capability: Not Supported 00:09:05.674 Abort Command Limit: 4 00:09:05.674 Async Event Request Limit: 4 00:09:05.674 Number of Firmware Slots: N/A 00:09:05.674 Firmware Slot 1 Read-Only: N/A 00:09:05.674 Firmware Activation Without Reset: N/A 00:09:05.674 Multiple Update Detection Support: N/A 00:09:05.674 Firmware Update Granularity: No Information Provided 00:09:05.674 Per-Namespace SMART Log: Yes 00:09:05.674 Asymmetric Namespace Access Log Page: Not Supported 00:09:05.674 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:05.674 Command Effects Log Page: Supported 00:09:05.674 Get Log Page Extended Data: Supported 00:09:05.674 Telemetry Log Pages: Not Supported 00:09:05.674 Persistent Event Log Pages: Not Supported 00:09:05.674 Supported Log Pages Log Page: May Support 00:09:05.674 Commands Supported & Effects Log Page: Not Supported 00:09:05.674 Feature Identifiers & Effects Log Page:May Support 00:09:05.674 NVMe-MI Commands & Effects Log Page: May Support 00:09:05.674 Data Area 4 for Telemetry Log: Not Supported 00:09:05.674 Error Log Page Entries Supported: 1 00:09:05.674 Keep Alive: Not Supported 00:09:05.674 00:09:05.674 NVM Command Set Attributes 00:09:05.674 ========================== 00:09:05.674 Submission Queue Entry Size 00:09:05.674 Max: 64 00:09:05.674 Min: 64 00:09:05.674 Completion Queue Entry Size 00:09:05.674 Max: 16 00:09:05.674 Min: 16 00:09:05.674 Number of Namespaces: 256 00:09:05.674 Compare Command: Supported 00:09:05.674 Write Uncorrectable Command: Not Supported 00:09:05.674 Dataset Management Command: Supported 00:09:05.674 Write Zeroes Command: Supported 00:09:05.674 Set Features Save Field: Supported 00:09:05.674 Reservations: Not Supported 00:09:05.674 Timestamp: Supported 00:09:05.674 Copy: Supported 00:09:05.674 Volatile Write Cache: Present 00:09:05.674 Atomic Write Unit (Normal): 1 00:09:05.674 Atomic Write Unit (PFail): 1 00:09:05.674 Atomic Compare & Write Unit: 1 00:09:05.674 Fused Compare & Write: Not Supported 00:09:05.674 Scatter-Gather List 00:09:05.674 SGL Command Set: Supported 00:09:05.674 SGL Keyed: Not Supported 00:09:05.674 SGL Bit Bucket Descriptor: Not Supported 00:09:05.674 SGL Metadata Pointer: Not Supported 00:09:05.674 Oversized SGL: Not Supported 00:09:05.674 SGL Metadata Address: Not Supported 00:09:05.674 SGL Offset: Not Supported 00:09:05.674 Transport SGL Data Block: Not Supported 00:09:05.674 Replay Protected Memory Block: Not Supported 00:09:05.674 00:09:05.674 Firmware Slot Information 00:09:05.675 ========================= 00:09:05.675 Active slot: 1 00:09:05.675 Slot 1 Firmware Revision: 1.0 00:09:05.675 00:09:05.675 00:09:05.675 Commands Supported and Effects 00:09:05.675 ============================== 00:09:05.675 Admin Commands 00:09:05.675 -------------- 00:09:05.675 Delete I/O Submission Queue (00h): Supported 00:09:05.675 Create I/O Submission Queue (01h): Supported 00:09:05.675 Get Log Page (02h): Supported 00:09:05.675 Delete I/O Completion Queue (04h): Supported 
00:09:05.675 Create I/O Completion Queue (05h): Supported 00:09:05.675 Identify (06h): Supported 00:09:05.675 Abort (08h): Supported 00:09:05.675 Set Features (09h): Supported 00:09:05.675 Get Features (0Ah): Supported 00:09:05.675 Asynchronous Event Request (0Ch): Supported 00:09:05.675 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:05.675 Directive Send (19h): Supported 00:09:05.675 Directive Receive (1Ah): Supported 00:09:05.675 Virtualization Management (1Ch): Supported 00:09:05.675 Doorbell Buffer Config (7Ch): Supported 00:09:05.675 Format NVM (80h): Supported LBA-Change 00:09:05.675 I/O Commands 00:09:05.675 ------------ 00:09:05.675 Flush (00h): Supported LBA-Change 00:09:05.675 Write (01h): Supported LBA-Change 00:09:05.675 Read (02h): Supported 00:09:05.675 Compare (05h): Supported 00:09:05.675 Write Zeroes (08h): Supported LBA-Change 00:09:05.675 Dataset Management (09h): Supported LBA-Change 00:09:05.675 Unknown (0Ch): Supported 00:09:05.675 Unknown (12h): Supported 00:09:05.675 Copy (19h): Supported LBA-Change 00:09:05.675 Unknown (1Dh): Supported LBA-Change 00:09:05.675 00:09:05.675 Error Log 00:09:05.675 ========= 00:09:05.675 00:09:05.675 Arbitration 00:09:05.675 =========== 00:09:05.675 Arbitration Burst: no limit 00:09:05.675 00:09:05.675 Power Management 00:09:05.675 ================ 00:09:05.675 Number of Power States: 1 00:09:05.675 Current Power State: Power State #0 00:09:05.675 Power State #0: 00:09:05.675 Max Power: 25.00 W 00:09:05.675 Non-Operational State: Operational 00:09:05.675 Entry Latency: 16 microseconds 00:09:05.675 Exit Latency: 4 microseconds 00:09:05.675 Relative Read Throughput: 0 00:09:05.675 Relative Read Latency: 0 00:09:05.675 Relative Write Throughput: 0 00:09:05.675 Relative Write Latency: 0 00:09:05.675 Idle Power: Not Reported 00:09:05.675 Active Power: Not Reported 00:09:05.675 Non-Operational Permissive Mode: Not Supported 00:09:05.675 00:09:05.675 Health Information 00:09:05.675 ================== 00:09:05.675 Critical Warnings: 00:09:05.675 Available Spare Space: OK 00:09:05.675 Temperature: OK 00:09:05.675 Device Reliability: OK 00:09:05.675 Read Only: No 00:09:05.675 Volatile Memory Backup: OK 00:09:05.675 Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.675 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:05.675 Available Spare: 0% 00:09:05.675 Available Spare Threshold: 0% 00:09:05.675 Life Percentage Used: 0% 00:09:05.675 Data Units Read: 691 00:09:05.675 Data Units Written: 619 00:09:05.675 Host Read Commands: 35174 00:09:05.675 Host Write Commands: 34960 00:09:05.675 Controller Busy Time: 0 minutes 00:09:05.675 Power Cycles: 0 00:09:05.675 Power On Hours: 0 hours 00:09:05.675 Unsafe Shutdowns: 0 00:09:05.675 Unrecoverable Media Errors: 0 00:09:05.675 Lifetime Error Log Entries: 0 00:09:05.675 Warning Temperature Time: 0 minutes 00:09:05.675 Critical Temperature Time: 0 minutes 00:09:05.675 00:09:05.675 Number of Queues 00:09:05.675 ================ 00:09:05.675 Number of I/O Submission Queues: 64 00:09:05.675 Number of I/O Completion Queues: 64 00:09:05.675 00:09:05.675 ZNS Specific Controller Data 00:09:05.675 ============================ 00:09:05.675 Zone Append Size Limit: 0 00:09:05.675 00:09:05.675 00:09:05.675 Active Namespaces 00:09:05.675 ================= 00:09:05.675 Namespace ID:1 00:09:05.675 Error Recovery Timeout: Unlimited 00:09:05.675 Command Set Identifier: NVM (00h) 00:09:05.675 Deallocate: Supported 00:09:05.675 Deallocated/Unwritten Error: Supported 00:09:05.675 Deallocated Read Value: 
All 0x00 00:09:05.675 Deallocate in Write Zeroes: Not Supported 00:09:05.675 Deallocated Guard Field: 0xFFFF 00:09:05.675 Flush: Supported 00:09:05.675 Reservation: Not Supported 00:09:05.675 Metadata Transferred as: Separate Metadata Buffer 00:09:05.675 Namespace Sharing Capabilities: Private 00:09:05.675 Size (in LBAs): 1548666 (5GiB) 00:09:05.675 Capacity (in LBAs): 1548666 (5GiB) 00:09:05.675 Utilization (in LBAs): 1548666 (5GiB) 00:09:05.675 Thin Provisioning: Not Supported 00:09:05.675 Per-NS Atomic Units: No 00:09:05.675 Maximum Single Source Range Length: 128 00:09:05.675 Maximum Copy Length: 128 00:09:05.675 Maximum Source Range Count: 128 00:09:05.675 NGUID/EUI64 Never Reused: No 00:09:05.675 Namespace Write Protected: No 00:09:05.675 Number of LBA Formats: 8 00:09:05.675 Current LBA Format: LBA Format #07 00:09:05.675 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:05.675 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.675 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.675 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.675 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.675 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.675 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.675 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.675 00:09:05.675 NVM Specific Namespace Data 00:09:05.675 =========================== 00:09:05.675 Logical Block Storage Tag Mask: 0 00:09:05.675 Protection Information Capabilities: 00:09:05.675 16b Guard Protection Information Storage Tag Support: No 00:09:05.675 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:05.675 Storage Tag Check Read Support: No 00:09:05.675 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.675 ===================================================== 00:09:05.675 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:05.675 ===================================================== 00:09:05.675 Controller Capabilities/Features 00:09:05.675 ================================ 00:09:05.675 [2024-11-21 04:55:22.235098] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74832 terminated unexpectedly 00:09:05.675 [2024-11-21 04:55:22.235814] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74832 terminated unexpectedly 00:09:05.675 Vendor ID: 1b36 00:09:05.675 Subsystem Vendor ID: 1af4 00:09:05.675 Serial Number: 12341 00:09:05.675 Model Number: QEMU NVMe Ctrl 00:09:05.675 Firmware Version: 8.0.0 00:09:05.675 Recommended Arb Burst: 6 00:09:05.675 IEEE OUI Identifier: 00 54 52 00:09:05.675 Multi-path I/O 00:09:05.675 May have multiple subsystem ports: No 00:09:05.675 May have multiple controllers: No 00:09:05.675
Associated with SR-IOV VF: No 00:09:05.675 Max Data Transfer Size: 524288 00:09:05.675 Max Number of Namespaces: 256 00:09:05.675 Max Number of I/O Queues: 64 00:09:05.675 NVMe Specification Version (VS): 1.4 00:09:05.675 NVMe Specification Version (Identify): 1.4 00:09:05.675 Maximum Queue Entries: 2048 00:09:05.675 Contiguous Queues Required: Yes 00:09:05.675 Arbitration Mechanisms Supported 00:09:05.675 Weighted Round Robin: Not Supported 00:09:05.675 Vendor Specific: Not Supported 00:09:05.675 Reset Timeout: 7500 ms 00:09:05.675 Doorbell Stride: 4 bytes 00:09:05.675 NVM Subsystem Reset: Not Supported 00:09:05.675 Command Sets Supported 00:09:05.675 NVM Command Set: Supported 00:09:05.675 Boot Partition: Not Supported 00:09:05.675 Memory Page Size Minimum: 4096 bytes 00:09:05.675 Memory Page Size Maximum: 65536 bytes 00:09:05.675 Persistent Memory Region: Not Supported 00:09:05.675 Optional Asynchronous Events Supported 00:09:05.675 Namespace Attribute Notices: Supported 00:09:05.675 Firmware Activation Notices: Not Supported 00:09:05.675 ANA Change Notices: Not Supported 00:09:05.675 PLE Aggregate Log Change Notices: Not Supported 00:09:05.675 LBA Status Info Alert Notices: Not Supported 00:09:05.675 EGE Aggregate Log Change Notices: Not Supported 00:09:05.675 Normal NVM Subsystem Shutdown event: Not Supported 00:09:05.675 Zone Descriptor Change Notices: Not Supported 00:09:05.675 Discovery Log Change Notices: Not Supported 00:09:05.675 Controller Attributes 00:09:05.675 128-bit Host Identifier: Not Supported 00:09:05.675 Non-Operational Permissive Mode: Not Supported 00:09:05.675 NVM Sets: Not Supported 00:09:05.676 Read Recovery Levels: Not Supported 00:09:05.676 Endurance Groups: Not Supported 00:09:05.676 Predictable Latency Mode: Not Supported 00:09:05.676 Traffic Based Keep ALive: Not Supported 00:09:05.676 Namespace Granularity: Not Supported 00:09:05.676 SQ Associations: Not Supported 00:09:05.676 UUID List: Not Supported 00:09:05.676 Multi-Domain Subsystem: Not Supported 00:09:05.676 Fixed Capacity Management: Not Supported 00:09:05.676 Variable Capacity Management: Not Supported 00:09:05.676 Delete Endurance Group: Not Supported 00:09:05.676 Delete NVM Set: Not Supported 00:09:05.676 Extended LBA Formats Supported: Supported 00:09:05.676 Flexible Data Placement Supported: Not Supported 00:09:05.676 00:09:05.676 Controller Memory Buffer Support 00:09:05.676 ================================ 00:09:05.676 Supported: No 00:09:05.676 00:09:05.676 Persistent Memory Region Support 00:09:05.676 ================================ 00:09:05.676 Supported: No 00:09:05.676 00:09:05.676 Admin Command Set Attributes 00:09:05.676 ============================ 00:09:05.676 Security Send/Receive: Not Supported 00:09:05.676 Format NVM: Supported 00:09:05.676 Firmware Activate/Download: Not Supported 00:09:05.676 Namespace Management: Supported 00:09:05.676 Device Self-Test: Not Supported 00:09:05.676 Directives: Supported 00:09:05.676 NVMe-MI: Not Supported 00:09:05.676 Virtualization Management: Not Supported 00:09:05.676 Doorbell Buffer Config: Supported 00:09:05.676 Get LBA Status Capability: Not Supported 00:09:05.676 Command & Feature Lockdown Capability: Not Supported 00:09:05.676 Abort Command Limit: 4 00:09:05.676 Async Event Request Limit: 4 00:09:05.676 Number of Firmware Slots: N/A 00:09:05.676 Firmware Slot 1 Read-Only: N/A 00:09:05.676 Firmware Activation Without Reset: N/A 00:09:05.676 Multiple Update Detection Support: N/A 00:09:05.676 Firmware Update Granularity: No Information 
Provided 00:09:05.676 Per-Namespace SMART Log: Yes 00:09:05.676 Asymmetric Namespace Access Log Page: Not Supported 00:09:05.676 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:05.676 Command Effects Log Page: Supported 00:09:05.676 Get Log Page Extended Data: Supported 00:09:05.676 Telemetry Log Pages: Not Supported 00:09:05.676 Persistent Event Log Pages: Not Supported 00:09:05.676 Supported Log Pages Log Page: May Support 00:09:05.676 Commands Supported & Effects Log Page: Not Supported 00:09:05.676 Feature Identifiers & Effects Log Page:May Support 00:09:05.676 NVMe-MI Commands & Effects Log Page: May Support 00:09:05.676 Data Area 4 for Telemetry Log: Not Supported 00:09:05.676 Error Log Page Entries Supported: 1 00:09:05.676 Keep Alive: Not Supported 00:09:05.676 00:09:05.676 NVM Command Set Attributes 00:09:05.676 ========================== 00:09:05.676 Submission Queue Entry Size 00:09:05.676 Max: 64 00:09:05.676 Min: 64 00:09:05.676 Completion Queue Entry Size 00:09:05.676 Max: 16 00:09:05.676 Min: 16 00:09:05.676 Number of Namespaces: 256 00:09:05.676 Compare Command: Supported 00:09:05.676 Write Uncorrectable Command: Not Supported 00:09:05.676 Dataset Management Command: Supported 00:09:05.676 Write Zeroes Command: Supported 00:09:05.676 Set Features Save Field: Supported 00:09:05.676 Reservations: Not Supported 00:09:05.676 Timestamp: Supported 00:09:05.676 Copy: Supported 00:09:05.676 Volatile Write Cache: Present 00:09:05.676 Atomic Write Unit (Normal): 1 00:09:05.676 Atomic Write Unit (PFail): 1 00:09:05.676 Atomic Compare & Write Unit: 1 00:09:05.676 Fused Compare & Write: Not Supported 00:09:05.676 Scatter-Gather List 00:09:05.676 SGL Command Set: Supported 00:09:05.676 SGL Keyed: Not Supported 00:09:05.676 SGL Bit Bucket Descriptor: Not Supported 00:09:05.676 SGL Metadata Pointer: Not Supported 00:09:05.676 Oversized SGL: Not Supported 00:09:05.676 SGL Metadata Address: Not Supported 00:09:05.676 SGL Offset: Not Supported 00:09:05.676 Transport SGL Data Block: Not Supported 00:09:05.676 Replay Protected Memory Block: Not Supported 00:09:05.676 00:09:05.676 Firmware Slot Information 00:09:05.676 ========================= 00:09:05.676 Active slot: 1 00:09:05.676 Slot 1 Firmware Revision: 1.0 00:09:05.676 00:09:05.676 00:09:05.676 Commands Supported and Effects 00:09:05.676 ============================== 00:09:05.676 Admin Commands 00:09:05.676 -------------- 00:09:05.676 Delete I/O Submission Queue (00h): Supported 00:09:05.676 Create I/O Submission Queue (01h): Supported 00:09:05.676 Get Log Page (02h): Supported 00:09:05.676 Delete I/O Completion Queue (04h): Supported 00:09:05.676 Create I/O Completion Queue (05h): Supported 00:09:05.676 Identify (06h): Supported 00:09:05.676 Abort (08h): Supported 00:09:05.676 Set Features (09h): Supported 00:09:05.676 Get Features (0Ah): Supported 00:09:05.676 Asynchronous Event Request (0Ch): Supported 00:09:05.676 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:05.676 Directive Send (19h): Supported 00:09:05.676 Directive Receive (1Ah): Supported 00:09:05.676 Virtualization Management (1Ch): Supported 00:09:05.676 Doorbell Buffer Config (7Ch): Supported 00:09:05.676 Format NVM (80h): Supported LBA-Change 00:09:05.676 I/O Commands 00:09:05.676 ------------ 00:09:05.676 Flush (00h): Supported LBA-Change 00:09:05.676 Write (01h): Supported LBA-Change 00:09:05.676 Read (02h): Supported 00:09:05.676 Compare (05h): Supported 00:09:05.676 Write Zeroes (08h): Supported LBA-Change 00:09:05.676 Dataset Management (09h): 
Supported LBA-Change 00:09:05.676 Unknown (0Ch): Supported 00:09:05.676 Unknown (12h): Supported 00:09:05.676 Copy (19h): Supported LBA-Change 00:09:05.676 Unknown (1Dh): Supported LBA-Change 00:09:05.676 00:09:05.676 Error Log 00:09:05.676 ========= 00:09:05.676 00:09:05.676 Arbitration 00:09:05.676 =========== 00:09:05.676 Arbitration Burst: no limit 00:09:05.676 00:09:05.676 Power Management 00:09:05.676 ================ 00:09:05.676 Number of Power States: 1 00:09:05.676 Current Power State: Power State #0 00:09:05.676 Power State #0: 00:09:05.676 Max Power: 25.00 W 00:09:05.676 Non-Operational State: Operational 00:09:05.676 Entry Latency: 16 microseconds 00:09:05.676 Exit Latency: 4 microseconds 00:09:05.676 Relative Read Throughput: 0 00:09:05.676 Relative Read Latency: 0 00:09:05.676 Relative Write Throughput: 0 00:09:05.676 Relative Write Latency: 0 00:09:05.676 Idle Power: Not Reported 00:09:05.676 Active Power: Not Reported 00:09:05.676 Non-Operational Permissive Mode: Not Supported 00:09:05.676 00:09:05.676 Health Information 00:09:05.676 ================== 00:09:05.676 Critical Warnings: 00:09:05.676 Available Spare Space: OK 00:09:05.676 [2024-11-21 04:55:22.236295] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74832 terminated unexpectedly 00:09:05.676 Temperature: OK 00:09:05.676 Device Reliability: OK 00:09:05.676 Read Only: No 00:09:05.676 Volatile Memory Backup: OK 00:09:05.676 Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.676 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:05.676 Available Spare: 0% 00:09:05.676 Available Spare Threshold: 0% 00:09:05.676 Life Percentage Used: 0% 00:09:05.676 Data Units Read: 1058 00:09:05.676 Data Units Written: 918 00:09:05.676 Host Read Commands: 53243 00:09:05.676 Host Write Commands: 51945 00:09:05.676 Controller Busy Time: 0 minutes 00:09:05.676 Power Cycles: 0 00:09:05.676 Power On Hours: 0 hours 00:09:05.676 Unsafe Shutdowns: 0 00:09:05.676 Unrecoverable Media Errors: 0 00:09:05.676 Lifetime Error Log Entries: 0 00:09:05.676 Warning Temperature Time: 0 minutes 00:09:05.676 Critical Temperature Time: 0 minutes 00:09:05.676 00:09:05.676 Number of Queues 00:09:05.676 ================ 00:09:05.676 Number of I/O Submission Queues: 64 00:09:05.676 Number of I/O Completion Queues: 64 00:09:05.676 00:09:05.676 ZNS Specific Controller Data 00:09:05.676 ============================ 00:09:05.676 Zone Append Size Limit: 0 00:09:05.676 00:09:05.676 00:09:05.676 Active Namespaces 00:09:05.676 ================= 00:09:05.676 Namespace ID:1 00:09:05.676 Error Recovery Timeout: Unlimited 00:09:05.676 Command Set Identifier: NVM (00h) 00:09:05.676 Deallocate: Supported 00:09:05.676 Deallocated/Unwritten Error: Supported 00:09:05.676 Deallocated Read Value: All 0x00 00:09:05.676 Deallocate in Write Zeroes: Not Supported 00:09:05.676 Deallocated Guard Field: 0xFFFF 00:09:05.676 Flush: Supported 00:09:05.676 Reservation: Not Supported 00:09:05.676 Namespace Sharing Capabilities: Private 00:09:05.676 Size (in LBAs): 1310720 (5GiB) 00:09:05.676 Capacity (in LBAs): 1310720 (5GiB) 00:09:05.676 Utilization (in LBAs): 1310720 (5GiB) 00:09:05.676 Thin Provisioning: Not Supported 00:09:05.676 Per-NS Atomic Units: No 00:09:05.676 Maximum Single Source Range Length: 128 00:09:05.677 Maximum Copy Length: 128 00:09:05.677 Maximum Source Range Count: 128 00:09:05.677 NGUID/EUI64 Never Reused: No 00:09:05.677 Namespace Write Protected: No 00:09:05.677 Number of LBA Formats: 8 00:09:05.677 Current LBA Format: LBA
Format #04 00:09:05.677 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:05.677 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.677 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.677 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.677 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.677 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.677 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.677 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.677 00:09:05.677 NVM Specific Namespace Data 00:09:05.677 =========================== 00:09:05.677 Logical Block Storage Tag Mask: 0 00:09:05.677 Protection Information Capabilities: 00:09:05.677 16b Guard Protection Information Storage Tag Support: No 00:09:05.677 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:05.677 Storage Tag Check Read Support: No 00:09:05.677 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.677 ===================================================== 00:09:05.677 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:05.677 ===================================================== 00:09:05.677 Controller Capabilities/Features 00:09:05.677 ================================ 00:09:05.677 Vendor ID: 1b36 00:09:05.677 Subsystem Vendor ID: 1af4 00:09:05.677 Serial Number: 12343 00:09:05.677 Model Number: QEMU NVMe Ctrl 00:09:05.677 Firmware Version: 8.0.0 00:09:05.677 Recommended Arb Burst: 6 00:09:05.677 IEEE OUI Identifier: 00 54 52 00:09:05.677 Multi-path I/O 00:09:05.677 May have multiple subsystem ports: No 00:09:05.677 May have multiple controllers: Yes 00:09:05.677 Associated with SR-IOV VF: No 00:09:05.677 Max Data Transfer Size: 524288 00:09:05.677 Max Number of Namespaces: 256 00:09:05.677 Max Number of I/O Queues: 64 00:09:05.677 NVMe Specification Version (VS): 1.4 00:09:05.677 NVMe Specification Version (Identify): 1.4 00:09:05.677 Maximum Queue Entries: 2048 00:09:05.677 Contiguous Queues Required: Yes 00:09:05.677 Arbitration Mechanisms Supported 00:09:05.677 Weighted Round Robin: Not Supported 00:09:05.677 Vendor Specific: Not Supported 00:09:05.677 Reset Timeout: 7500 ms 00:09:05.677 Doorbell Stride: 4 bytes 00:09:05.677 NVM Subsystem Reset: Not Supported 00:09:05.677 Command Sets Supported 00:09:05.677 NVM Command Set: Supported 00:09:05.677 Boot Partition: Not Supported 00:09:05.677 Memory Page Size Minimum: 4096 bytes 00:09:05.677 Memory Page Size Maximum: 65536 bytes 00:09:05.677 Persistent Memory Region: Not Supported 00:09:05.677 Optional Asynchronous Events Supported 00:09:05.677 Namespace Attribute Notices: Supported 00:09:05.677 Firmware Activation Notices: Not Supported 00:09:05.677 ANA Change Notices: Not Supported 00:09:05.677 PLE Aggregate Log Change 
Notices: Not Supported 00:09:05.677 LBA Status Info Alert Notices: Not Supported 00:09:05.677 EGE Aggregate Log Change Notices: Not Supported 00:09:05.677 Normal NVM Subsystem Shutdown event: Not Supported 00:09:05.677 Zone Descriptor Change Notices: Not Supported 00:09:05.677 Discovery Log Change Notices: Not Supported 00:09:05.677 Controller Attributes 00:09:05.677 128-bit Host Identifier: Not Supported 00:09:05.677 Non-Operational Permissive Mode: Not Supported 00:09:05.677 NVM Sets: Not Supported 00:09:05.677 Read Recovery Levels: Not Supported 00:09:05.677 Endurance Groups: Supported 00:09:05.677 Predictable Latency Mode: Not Supported 00:09:05.677 Traffic Based Keep ALive: Not Supported 00:09:05.677 Namespace Granularity: Not Supported 00:09:05.677 SQ Associations: Not Supported 00:09:05.677 UUID List: Not Supported 00:09:05.677 Multi-Domain Subsystem: Not Supported 00:09:05.677 Fixed Capacity Management: Not Supported 00:09:05.677 Variable Capacity Management: Not Supported 00:09:05.677 Delete Endurance Group: Not Supported 00:09:05.677 Delete NVM Set: Not Supported 00:09:05.677 Extended LBA Formats Supported: Supported 00:09:05.677 Flexible Data Placement Supported: Supported 00:09:05.677 00:09:05.677 Controller Memory Buffer Support 00:09:05.677 ================================ 00:09:05.677 Supported: No 00:09:05.677 00:09:05.677 Persistent Memory Region Support 00:09:05.677 ================================ 00:09:05.677 Supported: No 00:09:05.677 00:09:05.677 Admin Command Set Attributes 00:09:05.677 ============================ 00:09:05.677 Security Send/Receive: Not Supported 00:09:05.677 Format NVM: Supported 00:09:05.677 Firmware Activate/Download: Not Supported 00:09:05.677 Namespace Management: Supported 00:09:05.677 Device Self-Test: Not Supported 00:09:05.677 Directives: Supported 00:09:05.677 NVMe-MI: Not Supported 00:09:05.677 Virtualization Management: Not Supported 00:09:05.677 Doorbell Buffer Config: Supported 00:09:05.677 Get LBA Status Capability: Not Supported 00:09:05.677 Command & Feature Lockdown Capability: Not Supported 00:09:05.677 Abort Command Limit: 4 00:09:05.677 Async Event Request Limit: 4 00:09:05.677 Number of Firmware Slots: N/A 00:09:05.677 Firmware Slot 1 Read-Only: N/A 00:09:05.677 Firmware Activation Without Reset: N/A 00:09:05.677 Multiple Update Detection Support: N/A 00:09:05.677 Firmware Update Granularity: No Information Provided 00:09:05.677 Per-Namespace SMART Log: Yes 00:09:05.677 Asymmetric Namespace Access Log Page: Not Supported 00:09:05.677 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:05.677 Command Effects Log Page: Supported 00:09:05.677 Get Log Page Extended Data: Supported 00:09:05.677 Telemetry Log Pages: Not Supported 00:09:05.677 Persistent Event Log Pages: Not Supported 00:09:05.677 Supported Log Pages Log Page: May Support 00:09:05.677 Commands Supported & Effects Log Page: Not Supported 00:09:05.677 Feature Identifiers & Effects Log Page:May Support 00:09:05.677 NVMe-MI Commands & Effects Log Page: May Support 00:09:05.677 Data Area 4 for Telemetry Log: Not Supported 00:09:05.677 Error Log Page Entries Supported: 1 00:09:05.677 Keep Alive: Not Supported 00:09:05.677 00:09:05.677 NVM Command Set Attributes 00:09:05.677 ========================== 00:09:05.677 Submission Queue Entry Size 00:09:05.677 Max: 64 00:09:05.677 Min: 64 00:09:05.677 Completion Queue Entry Size 00:09:05.677 Max: 16 00:09:05.677 Min: 16 00:09:05.677 Number of Namespaces: 256 00:09:05.677 Compare Command: Supported 00:09:05.677 Write 
Uncorrectable Command: Not Supported 00:09:05.677 Dataset Management Command: Supported 00:09:05.677 Write Zeroes Command: Supported 00:09:05.677 Set Features Save Field: Supported 00:09:05.677 Reservations: Not Supported 00:09:05.677 Timestamp: Supported 00:09:05.677 Copy: Supported 00:09:05.677 Volatile Write Cache: Present 00:09:05.677 Atomic Write Unit (Normal): 1 00:09:05.677 Atomic Write Unit (PFail): 1 00:09:05.677 Atomic Compare & Write Unit: 1 00:09:05.677 Fused Compare & Write: Not Supported 00:09:05.677 Scatter-Gather List 00:09:05.677 SGL Command Set: Supported 00:09:05.677 SGL Keyed: Not Supported 00:09:05.677 SGL Bit Bucket Descriptor: Not Supported 00:09:05.677 SGL Metadata Pointer: Not Supported 00:09:05.677 Oversized SGL: Not Supported 00:09:05.677 SGL Metadata Address: Not Supported 00:09:05.677 SGL Offset: Not Supported 00:09:05.677 Transport SGL Data Block: Not Supported 00:09:05.677 Replay Protected Memory Block: Not Supported 00:09:05.677 00:09:05.677 Firmware Slot Information 00:09:05.677 ========================= 00:09:05.677 Active slot: 1 00:09:05.677 Slot 1 Firmware Revision: 1.0 00:09:05.677 00:09:05.677 00:09:05.677 Commands Supported and Effects 00:09:05.677 ============================== 00:09:05.677 Admin Commands 00:09:05.677 -------------- 00:09:05.677 Delete I/O Submission Queue (00h): Supported 00:09:05.677 Create I/O Submission Queue (01h): Supported 00:09:05.677 Get Log Page (02h): Supported 00:09:05.677 Delete I/O Completion Queue (04h): Supported 00:09:05.677 Create I/O Completion Queue (05h): Supported 00:09:05.677 Identify (06h): Supported 00:09:05.677 Abort (08h): Supported 00:09:05.677 Set Features (09h): Supported 00:09:05.677 Get Features (0Ah): Supported 00:09:05.677 Asynchronous Event Request (0Ch): Supported 00:09:05.677 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:05.677 Directive Send (19h): Supported 00:09:05.677 Directive Receive (1Ah): Supported 00:09:05.677 Virtualization Management (1Ch): Supported 00:09:05.677 Doorbell Buffer Config (7Ch): Supported 00:09:05.677 Format NVM (80h): Supported LBA-Change 00:09:05.677 I/O Commands 00:09:05.677 ------------ 00:09:05.677 Flush (00h): Supported LBA-Change 00:09:05.677 Write (01h): Supported LBA-Change 00:09:05.677 Read (02h): Supported 00:09:05.677 Compare (05h): Supported 00:09:05.677 Write Zeroes (08h): Supported LBA-Change 00:09:05.677 Dataset Management (09h): Supported LBA-Change 00:09:05.677 Unknown (0Ch): Supported 00:09:05.677 Unknown (12h): Supported 00:09:05.677 Copy (19h): Supported LBA-Change 00:09:05.677 Unknown (1Dh): Supported LBA-Change 00:09:05.677 00:09:05.677 Error Log 00:09:05.677 ========= 00:09:05.677 00:09:05.677 Arbitration 00:09:05.677 =========== 00:09:05.677 Arbitration Burst: no limit 00:09:05.677 00:09:05.677 Power Management 00:09:05.677 ================ 00:09:05.677 Number of Power States: 1 00:09:05.677 Current Power State: Power State #0 00:09:05.677 Power State #0: 00:09:05.677 Max Power: 25.00 W 00:09:05.677 Non-Operational State: Operational 00:09:05.677 Entry Latency: 16 microseconds 00:09:05.677 Exit Latency: 4 microseconds 00:09:05.677 Relative Read Throughput: 0 00:09:05.677 Relative Read Latency: 0 00:09:05.677 Relative Write Throughput: 0 00:09:05.677 Relative Write Latency: 0 00:09:05.677 Idle Power: Not Reported 00:09:05.677 Active Power: Not Reported 00:09:05.677 Non-Operational Permissive Mode: Not Supported 00:09:05.677 00:09:05.677 Health Information 00:09:05.677 ================== 00:09:05.677 Critical Warnings: 00:09:05.677 
Available Spare Space: OK 00:09:05.677 Temperature: OK 00:09:05.677 Device Reliability: OK 00:09:05.677 Read Only: No 00:09:05.677 Volatile Memory Backup: OK 00:09:05.677 Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.677 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:05.677 Available Spare: 0% 00:09:05.677 Available Spare Threshold: 0% 00:09:05.677 Life Percentage Used: 0% 00:09:05.677 Data Units Read: 797 00:09:05.677 Data Units Written: 726 00:09:05.677 Host Read Commands: 36421 00:09:05.677 Host Write Commands: 35844 00:09:05.677 Controller Busy Time: 0 minutes 00:09:05.677 Power Cycles: 0 00:09:05.677 Power On Hours: 0 hours 00:09:05.677 Unsafe Shutdowns: 0 00:09:05.677 Unrecoverable Media Errors: 0 00:09:05.677 Lifetime Error Log Entries: 0 00:09:05.677 Warning Temperature Time: 0 minutes 00:09:05.677 Critical Temperature Time: 0 minutes 00:09:05.677 00:09:05.677 Number of Queues 00:09:05.677 ================ 00:09:05.677 Number of I/O Submission Queues: 64 00:09:05.677 Number of I/O Completion Queues: 64 00:09:05.677 00:09:05.677 ZNS Specific Controller Data 00:09:05.677 ============================ 00:09:05.677 Zone Append Size Limit: 0 00:09:05.677 00:09:05.677 00:09:05.677 Active Namespaces 00:09:05.677 ================= 00:09:05.677 Namespace ID:1 00:09:05.677 Error Recovery Timeout: Unlimited 00:09:05.677 Command Set Identifier: NVM (00h) 00:09:05.678 Deallocate: Supported 00:09:05.678 Deallocated/Unwritten Error: Supported 00:09:05.678 Deallocated Read Value: All 0x00 00:09:05.678 Deallocate in Write Zeroes: Not Supported 00:09:05.678 Deallocated Guard Field: 0xFFFF 00:09:05.678 Flush: Supported 00:09:05.678 Reservation: Not Supported 00:09:05.678 Namespace Sharing Capabilities: Multiple Controllers 00:09:05.678 Size (in LBAs): 262144 (1GiB) 00:09:05.678 Capacity (in LBAs): 262144 (1GiB) 00:09:05.678 Utilization (in LBAs): 262144 (1GiB) 00:09:05.678 Thin Provisioning: Not Supported 00:09:05.678 Per-NS Atomic Units: No 00:09:05.678 Maximum Single Source Range Length: 128 00:09:05.678 Maximum Copy Length: 128 00:09:05.678 Maximum Source Range Count: 128 00:09:05.678 NGUID/EUI64 Never Reused: No 00:09:05.678 Namespace Write Protected: No 00:09:05.678 Endurance group ID: 1 00:09:05.678 Number of LBA Formats: 8 00:09:05.678 Current LBA Format: LBA Format #04 00:09:05.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:05.678 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.678 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.678 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.678 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.678 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.678 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.678 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.678 00:09:05.678 Get Feature FDP: 00:09:05.678 ================ 00:09:05.678 Enabled: Yes 00:09:05.678 FDP configuration index: 0 00:09:05.678 00:09:05.678 FDP configurations log page 00:09:05.678 =========================== 00:09:05.678 Number of FDP configurations: 1 00:09:05.678 Version: 0 00:09:05.678 Size: 112 00:09:05.678 FDP Configuration Descriptor: 0 00:09:05.678 Descriptor Size: 96 00:09:05.678 Reclaim Group Identifier format: 2 00:09:05.678 FDP Volatile Write Cache: Not Present 00:09:05.678 FDP Configuration: Valid 00:09:05.678 Vendor Specific Size: 0 00:09:05.678 Number of Reclaim Groups: 2 00:09:05.678 Number of Recalim Unit Handles: 8 00:09:05.678 Max Placement Identifiers: 128 00:09:05.678 Number of 
Namespaces Supported: 256 00:09:05.678 Reclaim Unit Nominal Size: 6000000 bytes 00:09:05.678 Estimated Reclaim Unit Time Limit: Not Reported 00:09:05.678 RUH Desc #000: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #001: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #002: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #003: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #004: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #005: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #006: RUH Type: Initially Isolated 00:09:05.678 RUH Desc #007: RUH Type: Initially Isolated 00:09:05.678 00:09:05.678 FDP reclaim unit handle usage log page 00:09:05.678 ====================================== 00:09:05.678 Number of Reclaim Unit Handles: 8 00:09:05.678 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:05.678 RUH Usage Desc #001: RUH Attributes: Unused 00:09:05.678 RUH Usage Desc #002: RUH Attributes: Unused 00:09:05.678 RUH Usage Desc #003: RUH Attributes: Unused 00:09:05.678 RUH Usage Desc #004: RUH Attributes: Unused 00:09:05.678 RUH Usage Desc #005: RUH Attributes: Unused 00:09:05.678 RUH Usage Desc #006: RUH Attributes: Unused 00:09:05.678 RUH Usage Desc #007: RUH Attributes: Unused 00:09:05.678 00:09:05.678 FDP statistics log page 00:09:05.678 ======================= 00:09:05.678 Host bytes with metadata written: 457023488 00:09:05.678 [2024-11-21 04:55:22.238269] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74832 terminated unexpected 00:09:05.678 Media bytes with metadata written: 457068544 00:09:05.678 Media bytes erased: 0 00:09:05.678 00:09:05.678 FDP events log page 00:09:05.678 =================== 00:09:05.678 Number of FDP events: 0 00:09:05.678 00:09:05.678 NVM Specific Namespace Data 00:09:05.678 =========================== 00:09:05.678 Logical Block Storage Tag Mask: 0 00:09:05.678 Protection Information Capabilities: 00:09:05.678 16b Guard Protection Information Storage Tag Support: No 00:09:05.678 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:05.678 Storage Tag Check Read Support: No 00:09:05.678 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.678 ===================================================== 00:09:05.678 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:05.678 ===================================================== 00:09:05.678 Controller Capabilities/Features 00:09:05.678 ================================ 00:09:05.678 Vendor ID: 1b36 00:09:05.678 Subsystem Vendor ID: 1af4 00:09:05.678 Serial Number: 12342 00:09:05.678 Model Number: QEMU NVMe Ctrl 00:09:05.678 Firmware Version: 8.0.0 00:09:05.678 Recommended Arb Burst: 6 00:09:05.678 IEEE OUI Identifier: 00 54 52 00:09:05.678 Multi-path I/O
00:09:05.678 May have multiple subsystem ports: No 00:09:05.678 May have multiple controllers: No 00:09:05.678 Associated with SR-IOV VF: No 00:09:05.678 Max Data Transfer Size: 524288 00:09:05.678 Max Number of Namespaces: 256 00:09:05.678 Max Number of I/O Queues: 64 00:09:05.678 NVMe Specification Version (VS): 1.4 00:09:05.678 NVMe Specification Version (Identify): 1.4 00:09:05.678 Maximum Queue Entries: 2048 00:09:05.678 Contiguous Queues Required: Yes 00:09:05.678 Arbitration Mechanisms Supported 00:09:05.678 Weighted Round Robin: Not Supported 00:09:05.678 Vendor Specific: Not Supported 00:09:05.678 Reset Timeout: 7500 ms 00:09:05.678 Doorbell Stride: 4 bytes 00:09:05.678 NVM Subsystem Reset: Not Supported 00:09:05.678 Command Sets Supported 00:09:05.678 NVM Command Set: Supported 00:09:05.678 Boot Partition: Not Supported 00:09:05.678 Memory Page Size Minimum: 4096 bytes 00:09:05.678 Memory Page Size Maximum: 65536 bytes 00:09:05.678 Persistent Memory Region: Not Supported 00:09:05.678 Optional Asynchronous Events Supported 00:09:05.678 Namespace Attribute Notices: Supported 00:09:05.678 Firmware Activation Notices: Not Supported 00:09:05.678 ANA Change Notices: Not Supported 00:09:05.678 PLE Aggregate Log Change Notices: Not Supported 00:09:05.678 LBA Status Info Alert Notices: Not Supported 00:09:05.678 EGE Aggregate Log Change Notices: Not Supported 00:09:05.678 Normal NVM Subsystem Shutdown event: Not Supported 00:09:05.678 Zone Descriptor Change Notices: Not Supported 00:09:05.678 Discovery Log Change Notices: Not Supported 00:09:05.678 Controller Attributes 00:09:05.678 128-bit Host Identifier: Not Supported 00:09:05.678 Non-Operational Permissive Mode: Not Supported 00:09:05.678 NVM Sets: Not Supported 00:09:05.678 Read Recovery Levels: Not Supported 00:09:05.678 Endurance Groups: Not Supported 00:09:05.678 Predictable Latency Mode: Not Supported 00:09:05.678 Traffic Based Keep Alive: Not Supported 00:09:05.678 Namespace Granularity: Not Supported 00:09:05.678 SQ Associations: Not Supported 00:09:05.678 UUID List: Not Supported 00:09:05.678 Multi-Domain Subsystem: Not Supported 00:09:05.678 Fixed Capacity Management: Not Supported 00:09:05.678 Variable Capacity Management: Not Supported 00:09:05.678 Delete Endurance Group: Not Supported 00:09:05.678 Delete NVM Set: Not Supported 00:09:05.678 Extended LBA Formats Supported: Supported 00:09:05.678 Flexible Data Placement Supported: Not Supported 00:09:05.678 00:09:05.678 Controller Memory Buffer Support 00:09:05.678 ================================ 00:09:05.678 Supported: No 00:09:05.678 00:09:05.678 Persistent Memory Region Support 00:09:05.678 ================================ 00:09:05.678 Supported: No 00:09:05.678 00:09:05.678 Admin Command Set Attributes 00:09:05.678 ============================ 00:09:05.678 Security Send/Receive: Not Supported 00:09:05.678 Format NVM: Supported 00:09:05.678 Firmware Activate/Download: Not Supported 00:09:05.678 Namespace Management: Supported 00:09:05.678 Device Self-Test: Not Supported 00:09:05.678 Directives: Supported 00:09:05.678 NVMe-MI: Not Supported 00:09:05.678 Virtualization Management: Not Supported 00:09:05.678 Doorbell Buffer Config: Supported 00:09:05.678 Get LBA Status Capability: Not Supported 00:09:05.678 Command & Feature Lockdown Capability: Not Supported 00:09:05.678 Abort Command Limit: 4 00:09:05.678 Async Event Request Limit: 4 00:09:05.678 Number of Firmware Slots: N/A 00:09:05.678 Firmware Slot 1 Read-Only: N/A 00:09:05.678 Firmware Activation Without Reset: N/A
00:09:05.678 Multiple Update Detection Support: N/A 00:09:05.678 Firmware Update Granularity: No Information Provided 00:09:05.678 Per-Namespace SMART Log: Yes 00:09:05.678 Asymmetric Namespace Access Log Page: Not Supported 00:09:05.678 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:05.678 Command Effects Log Page: Supported 00:09:05.678 Get Log Page Extended Data: Supported 00:09:05.678 Telemetry Log Pages: Not Supported 00:09:05.678 Persistent Event Log Pages: Not Supported 00:09:05.678 Supported Log Pages Log Page: May Support 00:09:05.678 Commands Supported & Effects Log Page: Not Supported 00:09:05.678 Feature Identifiers & Effects Log Page: May Support 00:09:05.678 NVMe-MI Commands & Effects Log Page: May Support 00:09:05.678 Data Area 4 for Telemetry Log: Not Supported 00:09:05.678 Error Log Page Entries Supported: 1 00:09:05.678 Keep Alive: Not Supported 00:09:05.678 00:09:05.678 NVM Command Set Attributes 00:09:05.678 ========================== 00:09:05.678 Submission Queue Entry Size 00:09:05.678 Max: 64 00:09:05.678 Min: 64 00:09:05.678 Completion Queue Entry Size 00:09:05.678 Max: 16 00:09:05.678 Min: 16 00:09:05.678 Number of Namespaces: 256 00:09:05.678 Compare Command: Supported 00:09:05.678 Write Uncorrectable Command: Not Supported 00:09:05.678 Dataset Management Command: Supported 00:09:05.678 Write Zeroes Command: Supported 00:09:05.678 Set Features Save Field: Supported 00:09:05.678 Reservations: Not Supported 00:09:05.678 Timestamp: Supported 00:09:05.678 Copy: Supported 00:09:05.678 Volatile Write Cache: Present 00:09:05.678 Atomic Write Unit (Normal): 1 00:09:05.678 Atomic Write Unit (PFail): 1 00:09:05.678 Atomic Compare & Write Unit: 1 00:09:05.678 Fused Compare & Write: Not Supported 00:09:05.678 Scatter-Gather List 00:09:05.678 SGL Command Set: Supported 00:09:05.678 SGL Keyed: Not Supported 00:09:05.678 SGL Bit Bucket Descriptor: Not Supported 00:09:05.678 SGL Metadata Pointer: Not Supported 00:09:05.678 Oversized SGL: Not Supported 00:09:05.678 SGL Metadata Address: Not Supported 00:09:05.678 SGL Offset: Not Supported 00:09:05.678 Transport SGL Data Block: Not Supported 00:09:05.678 Replay Protected Memory Block: Not Supported 00:09:05.678 00:09:05.678 Firmware Slot Information 00:09:05.678 ========================= 00:09:05.678 Active slot: 1 00:09:05.678 Slot 1 Firmware Revision: 1.0 00:09:05.678 00:09:05.678 00:09:05.678 Commands Supported and Effects 00:09:05.678 ============================== 00:09:05.678 Admin Commands 00:09:05.678 -------------- 00:09:05.678 Delete I/O Submission Queue (00h): Supported 00:09:05.678 Create I/O Submission Queue (01h): Supported 00:09:05.678 Get Log Page (02h): Supported 00:09:05.678 Delete I/O Completion Queue (04h): Supported 00:09:05.678 Create I/O Completion Queue (05h): Supported 00:09:05.678 Identify (06h): Supported 00:09:05.678 Abort (08h): Supported 00:09:05.678 Set Features (09h): Supported 00:09:05.678 Get Features (0Ah): Supported 00:09:05.678 Asynchronous Event Request (0Ch): Supported 00:09:05.678 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:05.678 Directive Send (19h): Supported 00:09:05.678 Directive Receive (1Ah): Supported 00:09:05.678 Virtualization Management (1Ch): Supported 00:09:05.678 Doorbell Buffer Config (7Ch): Supported 00:09:05.678 Format NVM (80h): Supported LBA-Change 00:09:05.678 I/O Commands 00:09:05.678 ------------ 00:09:05.678 Flush (00h): Supported LBA-Change 00:09:05.678 Write (01h): Supported LBA-Change 00:09:05.678 Read (02h): Supported 00:09:05.678 Compare (05h):
Supported 00:09:05.678 Write Zeroes (08h): Supported LBA-Change 00:09:05.678 Dataset Management (09h): Supported LBA-Change 00:09:05.678 Unknown (0Ch): Supported 00:09:05.678 Unknown (12h): Supported 00:09:05.678 Copy (19h): Supported LBA-Change 00:09:05.678 Unknown (1Dh): Supported LBA-Change 00:09:05.678 00:09:05.678 Error Log 00:09:05.678 ========= 00:09:05.678 00:09:05.678 Arbitration 00:09:05.678 =========== 00:09:05.678 Arbitration Burst: no limit 00:09:05.678 00:09:05.678 Power Management 00:09:05.678 ================ 00:09:05.678 Number of Power States: 1 00:09:05.678 Current Power State: Power State #0 00:09:05.678 Power State #0: 00:09:05.678 Max Power: 25.00 W 00:09:05.678 Non-Operational State: Operational 00:09:05.678 Entry Latency: 16 microseconds 00:09:05.678 Exit Latency: 4 microseconds 00:09:05.678 Relative Read Throughput: 0 00:09:05.678 Relative Read Latency: 0 00:09:05.678 Relative Write Throughput: 0 00:09:05.678 Relative Write Latency: 0 00:09:05.678 Idle Power: Not Reported 00:09:05.678 Active Power: Not Reported 00:09:05.678 Non-Operational Permissive Mode: Not Supported 00:09:05.678 00:09:05.678 Health Information 00:09:05.678 ================== 00:09:05.678 Critical Warnings: 00:09:05.678 Available Spare Space: OK 00:09:05.678 Temperature: OK 00:09:05.678 Device Reliability: OK 00:09:05.678 Read Only: No 00:09:05.678 Volatile Memory Backup: OK 00:09:05.678 Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.678 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:05.678 Available Spare: 0% 00:09:05.679 Available Spare Threshold: 0% 00:09:05.679 Life Percentage Used: 0% 00:09:05.679 Data Units Read: 2161 00:09:05.679 Data Units Written: 1948 00:09:05.679 Host Read Commands: 107123 00:09:05.679 Host Write Commands: 105393 00:09:05.679 Controller Busy Time: 0 minutes 00:09:05.679 Power Cycles: 0 00:09:05.679 Power On Hours: 0 hours 00:09:05.679 Unsafe Shutdowns: 0 00:09:05.679 Unrecoverable Media Errors: 0 00:09:05.679 Lifetime Error Log Entries: 0 00:09:05.679 Warning Temperature Time: 0 minutes 00:09:05.679 Critical Temperature Time: 0 minutes 00:09:05.679 00:09:05.679 Number of Queues 00:09:05.679 ================ 00:09:05.679 Number of I/O Submission Queues: 64 00:09:05.679 Number of I/O Completion Queues: 64 00:09:05.679 00:09:05.679 ZNS Specific Controller Data 00:09:05.679 ============================ 00:09:05.679 Zone Append Size Limit: 0 00:09:05.679 00:09:05.679 00:09:05.679 Active Namespaces 00:09:05.679 ================= 00:09:05.679 Namespace ID:1 00:09:05.679 Error Recovery Timeout: Unlimited 00:09:05.679 Command Set Identifier: NVM (00h) 00:09:05.679 Deallocate: Supported 00:09:05.679 Deallocated/Unwritten Error: Supported 00:09:05.679 Deallocated Read Value: All 0x00 00:09:05.679 Deallocate in Write Zeroes: Not Supported 00:09:05.679 Deallocated Guard Field: 0xFFFF 00:09:05.679 Flush: Supported 00:09:05.679 Reservation: Not Supported 00:09:05.679 Namespace Sharing Capabilities: Private 00:09:05.679 Size (in LBAs): 1048576 (4GiB) 00:09:05.679 Capacity (in LBAs): 1048576 (4GiB) 00:09:05.679 Utilization (in LBAs): 1048576 (4GiB) 00:09:05.679 Thin Provisioning: Not Supported 00:09:05.679 Per-NS Atomic Units: No 00:09:05.679 Maximum Single Source Range Length: 128 00:09:05.679 Maximum Copy Length: 128 00:09:05.679 Maximum Source Range Count: 128 00:09:05.679 NGUID/EUI64 Never Reused: No 00:09:05.679 Namespace Write Protected: No 00:09:05.679 Number of LBA Formats: 8 00:09:05.679 Current LBA Format: LBA Format #04 00:09:05.679 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:05.679 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.679 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.679 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.679 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.679 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.679 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.679 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.679 00:09:05.679 NVM Specific Namespace Data 00:09:05.679 =========================== 00:09:05.679 Logical Block Storage Tag Mask: 0 00:09:05.679 Protection Information Capabilities: 00:09:05.679 16b Guard Protection Information Storage Tag Support: No 00:09:05.679 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:05.679 Storage Tag Check Read Support: No 00:09:05.679 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Namespace ID:2 00:09:05.679 Error Recovery Timeout: Unlimited 00:09:05.679 Command Set Identifier: NVM (00h) 00:09:05.679 Deallocate: Supported 00:09:05.679 Deallocated/Unwritten Error: Supported 00:09:05.679 Deallocated Read Value: All 0x00 00:09:05.679 Deallocate in Write Zeroes: Not Supported 00:09:05.679 Deallocated Guard Field: 0xFFFF 00:09:05.679 Flush: Supported 00:09:05.679 Reservation: Not Supported 00:09:05.679 Namespace Sharing Capabilities: Private 00:09:05.679 Size (in LBAs): 1048576 (4GiB) 00:09:05.679 Capacity (in LBAs): 1048576 (4GiB) 00:09:05.679 Utilization (in LBAs): 1048576 (4GiB) 00:09:05.679 Thin Provisioning: Not Supported 00:09:05.679 Per-NS Atomic Units: No 00:09:05.679 Maximum Single Source Range Length: 128 00:09:05.679 Maximum Copy Length: 128 00:09:05.679 Maximum Source Range Count: 128 00:09:05.679 NGUID/EUI64 Never Reused: No 00:09:05.679 Namespace Write Protected: No 00:09:05.679 Number of LBA Formats: 8 00:09:05.679 Current LBA Format: LBA Format #04 00:09:05.679 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:05.679 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.679 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.679 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.679 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.679 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.679 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.679 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.679 00:09:05.679 NVM Specific Namespace Data 00:09:05.679 =========================== 00:09:05.679 Logical Block Storage Tag Mask: 0 00:09:05.679 Protection Information Capabilities: 00:09:05.679 16b Guard Protection Information Storage Tag Support: No 00:09:05.679 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:05.679 Storage Tag Check Read Support: No 00:09:05.679 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Namespace ID:3 00:09:05.679 Error Recovery Timeout: Unlimited 00:09:05.679 Command Set Identifier: NVM (00h) 00:09:05.679 Deallocate: Supported 00:09:05.679 Deallocated/Unwritten Error: Supported 00:09:05.679 Deallocated Read Value: All 0x00 00:09:05.679 Deallocate in Write Zeroes: Not Supported 00:09:05.679 Deallocated Guard Field: 0xFFFF 00:09:05.679 Flush: Supported 00:09:05.679 Reservation: Not Supported 00:09:05.679 Namespace Sharing Capabilities: Private 00:09:05.679 Size (in LBAs): 1048576 (4GiB) 00:09:05.679 Capacity (in LBAs): 1048576 (4GiB) 00:09:05.679 Utilization (in LBAs): 1048576 (4GiB) 00:09:05.679 Thin Provisioning: Not Supported 00:09:05.679 Per-NS Atomic Units: No 00:09:05.679 Maximum Single Source Range Length: 128 00:09:05.679 Maximum Copy Length: 128 00:09:05.679 Maximum Source Range Count: 128 00:09:05.679 NGUID/EUI64 Never Reused: No 00:09:05.679 Namespace Write Protected: No 00:09:05.679 Number of LBA Formats: 8 00:09:05.679 Current LBA Format: LBA Format #04 00:09:05.679 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:05.679 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.679 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.679 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.679 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.679 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.679 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.679 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.679 00:09:05.679 NVM Specific Namespace Data 00:09:05.679 =========================== 00:09:05.679 Logical Block Storage Tag Mask: 0 00:09:05.679 Protection Information Capabilities: 00:09:05.679 16b Guard Protection Information Storage Tag Support: No 00:09:05.679 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:05.679 Storage Tag Check Read Support: No 00:09:05.679 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.679 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:05.679 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:05.941 ===================================================== 00:09:05.941 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:05.941 ===================================================== 00:09:05.941 Controller Capabilities/Features 00:09:05.941 ================================ 00:09:05.941 Vendor ID: 1b36 00:09:05.941 Subsystem Vendor ID: 1af4 00:09:05.941 Serial Number: 12340 00:09:05.941 Model Number: QEMU NVMe Ctrl 00:09:05.941 Firmware Version: 8.0.0 00:09:05.941 Recommended Arb Burst: 6 00:09:05.941 IEEE OUI Identifier: 00 54 52 00:09:05.941 Multi-path I/O 00:09:05.941 May have multiple subsystem ports: No 00:09:05.941 May have multiple controllers: No 00:09:05.941 Associated with SR-IOV VF: No 00:09:05.941 Max Data Transfer Size: 524288 00:09:05.941 Max Number of Namespaces: 256 00:09:05.941 Max Number of I/O Queues: 64 00:09:05.941 NVMe Specification Version (VS): 1.4 00:09:05.941 NVMe Specification Version (Identify): 1.4 00:09:05.941 Maximum Queue Entries: 2048 00:09:05.941 Contiguous Queues Required: Yes 00:09:05.941 Arbitration Mechanisms Supported 00:09:05.941 Weighted Round Robin: Not Supported 00:09:05.941 Vendor Specific: Not Supported 00:09:05.941 Reset Timeout: 7500 ms 00:09:05.941 Doorbell Stride: 4 bytes 00:09:05.941 NVM Subsystem Reset: Not Supported 00:09:05.941 Command Sets Supported 00:09:05.941 NVM Command Set: Supported 00:09:05.941 Boot Partition: Not Supported 00:09:05.941 Memory Page Size Minimum: 4096 bytes 00:09:05.941 Memory Page Size Maximum: 65536 bytes 00:09:05.941 Persistent Memory Region: Not Supported 00:09:05.941 Optional Asynchronous Events Supported 00:09:05.941 Namespace Attribute Notices: Supported 00:09:05.941 Firmware Activation Notices: Not Supported 00:09:05.941 ANA Change Notices: Not Supported 00:09:05.941 PLE Aggregate Log Change Notices: Not Supported 00:09:05.941 LBA Status Info Alert Notices: Not Supported 00:09:05.941 EGE Aggregate Log Change Notices: Not Supported 00:09:05.941 Normal NVM Subsystem Shutdown event: Not Supported 00:09:05.941 Zone Descriptor Change Notices: Not Supported 00:09:05.941 Discovery Log Change Notices: Not Supported 00:09:05.941 Controller Attributes 00:09:05.941 128-bit Host Identifier: Not Supported 00:09:05.941 Non-Operational Permissive Mode: Not Supported 00:09:05.941 NVM Sets: Not Supported 00:09:05.941 Read Recovery Levels: Not Supported 00:09:05.941 Endurance Groups: Not Supported 00:09:05.941 Predictable Latency Mode: Not Supported 00:09:05.941 Traffic Based Keep Alive: Not Supported 00:09:05.941 Namespace Granularity: Not Supported 00:09:05.941 SQ Associations: Not Supported 00:09:05.941 UUID List: Not Supported 00:09:05.941 Multi-Domain Subsystem: Not Supported 00:09:05.941 Fixed Capacity Management: Not Supported 00:09:05.941 Variable Capacity Management: Not Supported 00:09:05.941 Delete Endurance Group: Not Supported 00:09:05.941 Delete NVM Set: Not Supported 00:09:05.941 Extended LBA Formats Supported: Supported 00:09:05.941 Flexible Data Placement Supported: Not Supported 00:09:05.941 00:09:05.941 Controller Memory Buffer Support 00:09:05.941 ================================ 00:09:05.941 Supported: No 00:09:05.941 00:09:05.941 Persistent Memory Region Support 00:09:05.941
================================ 00:09:05.941 Supported: No 00:09:05.941 00:09:05.941 Admin Command Set Attributes 00:09:05.941 ============================ 00:09:05.941 Security Send/Receive: Not Supported 00:09:05.941 Format NVM: Supported 00:09:05.941 Firmware Activate/Download: Not Supported 00:09:05.941 Namespace Management: Supported 00:09:05.941 Device Self-Test: Not Supported 00:09:05.941 Directives: Supported 00:09:05.941 NVMe-MI: Not Supported 00:09:05.941 Virtualization Management: Not Supported 00:09:05.941 Doorbell Buffer Config: Supported 00:09:05.941 Get LBA Status Capability: Not Supported 00:09:05.941 Command & Feature Lockdown Capability: Not Supported 00:09:05.941 Abort Command Limit: 4 00:09:05.941 Async Event Request Limit: 4 00:09:05.941 Number of Firmware Slots: N/A 00:09:05.941 Firmware Slot 1 Read-Only: N/A 00:09:05.941 Firmware Activation Without Reset: N/A 00:09:05.941 Multiple Update Detection Support: N/A 00:09:05.941 Firmware Update Granularity: No Information Provided 00:09:05.941 Per-Namespace SMART Log: Yes 00:09:05.941 Asymmetric Namespace Access Log Page: Not Supported 00:09:05.941 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:05.941 Command Effects Log Page: Supported 00:09:05.941 Get Log Page Extended Data: Supported 00:09:05.941 Telemetry Log Pages: Not Supported 00:09:05.941 Persistent Event Log Pages: Not Supported 00:09:05.941 Supported Log Pages Log Page: May Support 00:09:05.941 Commands Supported & Effects Log Page: Not Supported 00:09:05.941 Feature Identifiers & Effects Log Page: May Support 00:09:05.941 NVMe-MI Commands & Effects Log Page: May Support 00:09:05.941 Data Area 4 for Telemetry Log: Not Supported 00:09:05.941 Error Log Page Entries Supported: 1 00:09:05.941 Keep Alive: Not Supported 00:09:05.941 00:09:05.941 NVM Command Set Attributes 00:09:05.941 ========================== 00:09:05.941 Submission Queue Entry Size 00:09:05.941 Max: 64 00:09:05.941 Min: 64 00:09:05.941 Completion Queue Entry Size 00:09:05.941 Max: 16 00:09:05.941 Min: 16 00:09:05.941 Number of Namespaces: 256 00:09:05.941 Compare Command: Supported 00:09:05.941 Write Uncorrectable Command: Not Supported 00:09:05.941 Dataset Management Command: Supported 00:09:05.941 Write Zeroes Command: Supported 00:09:05.942 Set Features Save Field: Supported 00:09:05.942 Reservations: Not Supported 00:09:05.942 Timestamp: Supported 00:09:05.942 Copy: Supported 00:09:05.942 Volatile Write Cache: Present 00:09:05.942 Atomic Write Unit (Normal): 1 00:09:05.942 Atomic Write Unit (PFail): 1 00:09:05.942 Atomic Compare & Write Unit: 1 00:09:05.942 Fused Compare & Write: Not Supported 00:09:05.942 Scatter-Gather List 00:09:05.942 SGL Command Set: Supported 00:09:05.942 SGL Keyed: Not Supported 00:09:05.942 SGL Bit Bucket Descriptor: Not Supported 00:09:05.942 SGL Metadata Pointer: Not Supported 00:09:05.942 Oversized SGL: Not Supported 00:09:05.942 SGL Metadata Address: Not Supported 00:09:05.942 SGL Offset: Not Supported 00:09:05.942 Transport SGL Data Block: Not Supported 00:09:05.942 Replay Protected Memory Block: Not Supported 00:09:05.942 00:09:05.942 Firmware Slot Information 00:09:05.942 ========================= 00:09:05.942 Active slot: 1 00:09:05.942 Slot 1 Firmware Revision: 1.0 00:09:05.942 00:09:05.942 00:09:05.942 Commands Supported and Effects 00:09:05.942 ============================== 00:09:05.942 Admin Commands 00:09:05.942 -------------- 00:09:05.942 Delete I/O Submission Queue (00h): Supported 00:09:05.942 Create I/O Submission Queue (01h): Supported 00:09:05.942
Get Log Page (02h): Supported 00:09:05.942 Delete I/O Completion Queue (04h): Supported 00:09:05.942 Create I/O Completion Queue (05h): Supported 00:09:05.942 Identify (06h): Supported 00:09:05.942 Abort (08h): Supported 00:09:05.942 Set Features (09h): Supported 00:09:05.942 Get Features (0Ah): Supported 00:09:05.942 Asynchronous Event Request (0Ch): Supported 00:09:05.942 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:05.942 Directive Send (19h): Supported 00:09:05.942 Directive Receive (1Ah): Supported 00:09:05.942 Virtualization Management (1Ch): Supported 00:09:05.942 Doorbell Buffer Config (7Ch): Supported 00:09:05.942 Format NVM (80h): Supported LBA-Change 00:09:05.942 I/O Commands 00:09:05.942 ------------ 00:09:05.942 Flush (00h): Supported LBA-Change 00:09:05.942 Write (01h): Supported LBA-Change 00:09:05.942 Read (02h): Supported 00:09:05.942 Compare (05h): Supported 00:09:05.942 Write Zeroes (08h): Supported LBA-Change 00:09:05.942 Dataset Management (09h): Supported LBA-Change 00:09:05.942 Unknown (0Ch): Supported 00:09:05.942 Unknown (12h): Supported 00:09:05.942 Copy (19h): Supported LBA-Change 00:09:05.942 Unknown (1Dh): Supported LBA-Change 00:09:05.942 00:09:05.942 Error Log 00:09:05.942 ========= 00:09:05.942 00:09:05.942 Arbitration 00:09:05.942 =========== 00:09:05.942 Arbitration Burst: no limit 00:09:05.942 00:09:05.942 Power Management 00:09:05.942 ================ 00:09:05.942 Number of Power States: 1 00:09:05.942 Current Power State: Power State #0 00:09:05.942 Power State #0: 00:09:05.942 Max Power: 25.00 W 00:09:05.942 Non-Operational State: Operational 00:09:05.942 Entry Latency: 16 microseconds 00:09:05.942 Exit Latency: 4 microseconds 00:09:05.942 Relative Read Throughput: 0 00:09:05.942 Relative Read Latency: 0 00:09:05.942 Relative Write Throughput: 0 00:09:05.942 Relative Write Latency: 0 00:09:05.942 Idle Power: Not Reported 00:09:05.942 Active Power: Not Reported 00:09:05.942 Non-Operational Permissive Mode: Not Supported 00:09:05.942 00:09:05.942 Health Information 00:09:05.942 ================== 00:09:05.942 Critical Warnings: 00:09:05.942 Available Spare Space: OK 00:09:05.942 Temperature: OK 00:09:05.942 Device Reliability: OK 00:09:05.942 Read Only: No 00:09:05.942 Volatile Memory Backup: OK 00:09:05.942 Current Temperature: 323 Kelvin (50 Celsius) 00:09:05.942 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:05.942 Available Spare: 0% 00:09:05.942 Available Spare Threshold: 0% 00:09:05.942 Life Percentage Used: 0% 00:09:05.942 Data Units Read: 691 00:09:05.942 Data Units Written: 619 00:09:05.942 Host Read Commands: 35174 00:09:05.942 Host Write Commands: 34960 00:09:05.942 Controller Busy Time: 0 minutes 00:09:05.942 Power Cycles: 0 00:09:05.942 Power On Hours: 0 hours 00:09:05.942 Unsafe Shutdowns: 0 00:09:05.942 Unrecoverable Media Errors: 0 00:09:05.942 Lifetime Error Log Entries: 0 00:09:05.942 Warning Temperature Time: 0 minutes 00:09:05.942 Critical Temperature Time: 0 minutes 00:09:05.942 00:09:05.942 Number of Queues 00:09:05.942 ================ 00:09:05.942 Number of I/O Submission Queues: 64 00:09:05.942 Number of I/O Completion Queues: 64 00:09:05.942 00:09:05.942 ZNS Specific Controller Data 00:09:05.942 ============================ 00:09:05.942 Zone Append Size Limit: 0 00:09:05.942 00:09:05.942 00:09:05.942 Active Namespaces 00:09:05.942 ================= 00:09:05.942 Namespace ID:1 00:09:05.942 Error Recovery Timeout: Unlimited 00:09:05.942 Command Set Identifier: NVM (00h) 00:09:05.942 Deallocate: Supported 
00:09:05.942 Deallocated/Unwritten Error: Supported 00:09:05.942 Deallocated Read Value: All 0x00 00:09:05.942 Deallocate in Write Zeroes: Not Supported 00:09:05.942 Deallocated Guard Field: 0xFFFF 00:09:05.942 Flush: Supported 00:09:05.942 Reservation: Not Supported 00:09:05.942 Metadata Transferred as: Separate Metadata Buffer 00:09:05.942 Namespace Sharing Capabilities: Private 00:09:05.942 Size (in LBAs): 1548666 (5GiB) 00:09:05.942 Capacity (in LBAs): 1548666 (5GiB) 00:09:05.942 Utilization (in LBAs): 1548666 (5GiB) 00:09:05.942 Thin Provisioning: Not Supported 00:09:05.942 Per-NS Atomic Units: No 00:09:05.942 Maximum Single Source Range Length: 128 00:09:05.942 Maximum Copy Length: 128 00:09:05.942 Maximum Source Range Count: 128 00:09:05.942 NGUID/EUI64 Never Reused: No 00:09:05.942 Namespace Write Protected: No 00:09:05.942 Number of LBA Formats: 8 00:09:05.942 Current LBA Format: LBA Format #07 00:09:05.942 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:05.942 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:05.942 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:05.942 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:05.942 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:05.942 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:05.942 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:05.942 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:05.942 00:09:05.942 NVM Specific Namespace Data 00:09:05.942 =========================== 00:09:05.942 Logical Block Storage Tag Mask: 0 00:09:05.942 Protection Information Capabilities: 00:09:05.942 16b Guard Protection Information Storage Tag Support: No 00:09:05.942 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:05.942 Storage Tag Check Read Support: No 00:09:05.942 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:05.942 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:05.942 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:05.942 ===================================================== 00:09:05.942 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:05.942 ===================================================== 00:09:05.942 Controller Capabilities/Features 00:09:05.942 ================================ 00:09:05.942 Vendor ID: 1b36 00:09:05.942 Subsystem Vendor ID: 1af4 00:09:05.942 Serial Number: 12341 00:09:05.942 Model Number: QEMU NVMe Ctrl 00:09:05.942 Firmware Version: 8.0.0 00:09:05.942 Recommended Arb Burst: 6 00:09:05.942 IEEE OUI Identifier: 00 54 52 00:09:05.942 Multi-path I/O 00:09:05.942 May have multiple subsystem ports: No 00:09:05.942 May have multiple 
controllers: No 00:09:05.942 Associated with SR-IOV VF: No 00:09:05.942 Max Data Transfer Size: 524288 00:09:05.942 Max Number of Namespaces: 256 00:09:05.942 Max Number of I/O Queues: 64 00:09:05.942 NVMe Specification Version (VS): 1.4 00:09:05.942 NVMe Specification Version (Identify): 1.4 00:09:05.942 Maximum Queue Entries: 2048 00:09:05.942 Contiguous Queues Required: Yes 00:09:05.942 Arbitration Mechanisms Supported 00:09:05.942 Weighted Round Robin: Not Supported 00:09:05.942 Vendor Specific: Not Supported 00:09:05.942 Reset Timeout: 7500 ms 00:09:05.942 Doorbell Stride: 4 bytes 00:09:05.942 NVM Subsystem Reset: Not Supported 00:09:05.942 Command Sets Supported 00:09:05.943 NVM Command Set: Supported 00:09:05.943 Boot Partition: Not Supported 00:09:05.943 Memory Page Size Minimum: 4096 bytes 00:09:05.943 Memory Page Size Maximum: 65536 bytes 00:09:05.943 Persistent Memory Region: Not Supported 00:09:05.943 Optional Asynchronous Events Supported 00:09:05.943 Namespace Attribute Notices: Supported 00:09:05.943 Firmware Activation Notices: Not Supported 00:09:05.943 ANA Change Notices: Not Supported 00:09:05.943 PLE Aggregate Log Change Notices: Not Supported 00:09:05.943 LBA Status Info Alert Notices: Not Supported 00:09:05.943 EGE Aggregate Log Change Notices: Not Supported 00:09:05.943 Normal NVM Subsystem Shutdown event: Not Supported 00:09:05.943 Zone Descriptor Change Notices: Not Supported 00:09:05.943 Discovery Log Change Notices: Not Supported 00:09:05.943 Controller Attributes 00:09:05.943 128-bit Host Identifier: Not Supported 00:09:05.943 Non-Operational Permissive Mode: Not Supported 00:09:05.943 NVM Sets: Not Supported 00:09:05.943 Read Recovery Levels: Not Supported 00:09:05.943 Endurance Groups: Not Supported 00:09:05.943 Predictable Latency Mode: Not Supported 00:09:05.943 Traffic Based Keep Alive: Not Supported 00:09:05.943 Namespace Granularity: Not Supported 00:09:05.943 SQ Associations: Not Supported 00:09:05.943 UUID List: Not Supported 00:09:05.943 Multi-Domain Subsystem: Not Supported 00:09:05.943 Fixed Capacity Management: Not Supported 00:09:05.943 Variable Capacity Management: Not Supported 00:09:05.943 Delete Endurance Group: Not Supported 00:09:05.943 Delete NVM Set: Not Supported 00:09:05.943 Extended LBA Formats Supported: Supported 00:09:05.943 Flexible Data Placement Supported: Not Supported 00:09:05.943 00:09:05.943 Controller Memory Buffer Support 00:09:05.943 ================================ 00:09:05.943 Supported: No 00:09:05.943 00:09:05.943 Persistent Memory Region Support 00:09:05.943 ================================ 00:09:05.943 Supported: No 00:09:05.943 00:09:05.943 Admin Command Set Attributes 00:09:05.943 ============================ 00:09:05.943 Security Send/Receive: Not Supported 00:09:05.943 Format NVM: Supported 00:09:05.943 Firmware Activate/Download: Not Supported 00:09:05.943 Namespace Management: Supported 00:09:05.943 Device Self-Test: Not Supported 00:09:05.943 Directives: Supported 00:09:05.943 NVMe-MI: Not Supported 00:09:05.943 Virtualization Management: Not Supported 00:09:05.943 Doorbell Buffer Config: Supported 00:09:05.943 Get LBA Status Capability: Not Supported 00:09:05.943 Command & Feature Lockdown Capability: Not Supported 00:09:05.943 Abort Command Limit: 4 00:09:05.943 Async Event Request Limit: 4 00:09:05.943 Number of Firmware Slots: N/A 00:09:05.943 Firmware Slot 1 Read-Only: N/A 00:09:05.943 Firmware Activation Without Reset: N/A 00:09:05.943 Multiple Update Detection Support: N/A 00:09:05.943 Firmware Update
Granularity: No Information Provided 00:09:05.943 Per-Namespace SMART Log: Yes 00:09:05.943 Asymmetric Namespace Access Log Page: Not Supported 00:09:05.943 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:05.943 Command Effects Log Page: Supported 00:09:05.943 Get Log Page Extended Data: Supported 00:09:05.943 Telemetry Log Pages: Not Supported 00:09:05.943 Persistent Event Log Pages: Not Supported 00:09:05.943 Supported Log Pages Log Page: May Support 00:09:05.943 Commands Supported & Effects Log Page: Not Supported 00:09:05.943 Feature Identifiers & Effects Log Page: May Support 00:09:05.943 NVMe-MI Commands & Effects Log Page: May Support 00:09:05.943 Data Area 4 for Telemetry Log: Not Supported 00:09:05.943 Error Log Page Entries Supported: 1 00:09:05.943 Keep Alive: Not Supported 00:09:05.943 00:09:05.943 NVM Command Set Attributes 00:09:05.943 ========================== 00:09:05.943 Submission Queue Entry Size 00:09:05.943 Max: 64 00:09:05.943 Min: 64 00:09:05.943 Completion Queue Entry Size 00:09:05.943 Max: 16 00:09:05.943 Min: 16 00:09:05.943 Number of Namespaces: 256 00:09:05.943 Compare Command: Supported 00:09:05.943 Write Uncorrectable Command: Not Supported 00:09:05.943 Dataset Management Command: Supported 00:09:05.943 Write Zeroes Command: Supported 00:09:05.943 Set Features Save Field: Supported 00:09:05.943 Reservations: Not Supported 00:09:05.943 Timestamp: Supported 00:09:05.943 Copy: Supported 00:09:05.943 Volatile Write Cache: Present 00:09:05.943 Atomic Write Unit (Normal): 1 00:09:05.943 Atomic Write Unit (PFail): 1 00:09:05.943 Atomic Compare & Write Unit: 1 00:09:05.943 Fused Compare & Write: Not Supported 00:09:05.943 Scatter-Gather List 00:09:05.943 SGL Command Set: Supported 00:09:05.943 SGL Keyed: Not Supported 00:09:05.943 SGL Bit Bucket Descriptor: Not Supported 00:09:05.943 SGL Metadata Pointer: Not Supported 00:09:05.943 Oversized SGL: Not Supported 00:09:05.943 SGL Metadata Address: Not Supported 00:09:05.943 SGL Offset: Not Supported 00:09:05.943 Transport SGL Data Block: Not Supported 00:09:05.943 Replay Protected Memory Block: Not Supported 00:09:05.943 00:09:05.943 Firmware Slot Information 00:09:05.943 ========================= 00:09:05.943 Active slot: 1 00:09:05.943 Slot 1 Firmware Revision: 1.0 00:09:05.943 00:09:05.943 00:09:05.943 Commands Supported and Effects 00:09:05.943 ============================== 00:09:05.943 Admin Commands 00:09:05.943 -------------- 00:09:05.943 Delete I/O Submission Queue (00h): Supported 00:09:05.943 Create I/O Submission Queue (01h): Supported 00:09:05.943 Get Log Page (02h): Supported 00:09:05.943 Delete I/O Completion Queue (04h): Supported 00:09:05.943 Create I/O Completion Queue (05h): Supported 00:09:05.943 Identify (06h): Supported 00:09:05.943 Abort (08h): Supported 00:09:05.943 Set Features (09h): Supported 00:09:05.943 Get Features (0Ah): Supported 00:09:05.943 Asynchronous Event Request (0Ch): Supported 00:09:05.943 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:05.943 Directive Send (19h): Supported 00:09:05.943 Directive Receive (1Ah): Supported 00:09:05.943 Virtualization Management (1Ch): Supported 00:09:05.943 Doorbell Buffer Config (7Ch): Supported 00:09:05.943 Format NVM (80h): Supported LBA-Change 00:09:05.943 I/O Commands 00:09:05.943 ------------ 00:09:05.943 Flush (00h): Supported LBA-Change 00:09:05.943 Write (01h): Supported LBA-Change 00:09:05.943 Read (02h): Supported 00:09:05.943 Compare (05h): Supported 00:09:05.943 Write Zeroes (08h): Supported LBA-Change 00:09:05.943
Dataset Management (09h): Supported LBA-Change 00:09:05.943 Unknown (0Ch): Supported 00:09:05.943 Unknown (12h): Supported 00:09:05.943 Copy (19h): Supported LBA-Change 00:09:05.943 Unknown (1Dh): Supported LBA-Change 00:09:05.943 00:09:05.943 Error Log 00:09:05.943 ========= 00:09:05.943 00:09:05.943 Arbitration 00:09:05.943 =========== 00:09:05.943 Arbitration Burst: no limit 00:09:05.943 00:09:05.943 Power Management 00:09:05.943 ================ 00:09:05.943 Number of Power States: 1 00:09:05.943 Current Power State: Power State #0 00:09:05.943 Power State #0: 00:09:05.943 Max Power: 25.00 W 00:09:05.943 Non-Operational State: Operational 00:09:05.943 Entry Latency: 16 microseconds 00:09:05.943 Exit Latency: 4 microseconds 00:09:05.943 Relative Read Throughput: 0 00:09:05.943 Relative Read Latency: 0 00:09:05.943 Relative Write Throughput: 0 00:09:05.943 Relative Write Latency: 0 00:09:06.206 Idle Power: Not Reported 00:09:06.206 Active Power: Not Reported 00:09:06.206 Non-Operational Permissive Mode: Not Supported 00:09:06.206 00:09:06.206 Health Information 00:09:06.206 ================== 00:09:06.206 Critical Warnings: 00:09:06.206 Available Spare Space: OK 00:09:06.206 Temperature: OK 00:09:06.206 Device Reliability: OK 00:09:06.206 Read Only: No 00:09:06.206 Volatile Memory Backup: OK 00:09:06.206 Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.206 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:06.206 Available Spare: 0% 00:09:06.206 Available Spare Threshold: 0% 00:09:06.206 Life Percentage Used: 0% 00:09:06.206 Data Units Read: 1058 00:09:06.206 Data Units Written: 918 00:09:06.206 Host Read Commands: 53243 00:09:06.206 Host Write Commands: 51945 00:09:06.206 Controller Busy Time: 0 minutes 00:09:06.206 Power Cycles: 0 00:09:06.206 Power On Hours: 0 hours 00:09:06.206 Unsafe Shutdowns: 0 00:09:06.206 Unrecoverable Media Errors: 0 00:09:06.206 Lifetime Error Log Entries: 0 00:09:06.206 Warning Temperature Time: 0 minutes 00:09:06.206 Critical Temperature Time: 0 minutes 00:09:06.206 00:09:06.206 Number of Queues 00:09:06.206 ================ 00:09:06.206 Number of I/O Submission Queues: 64 00:09:06.206 Number of I/O Completion Queues: 64 00:09:06.206 00:09:06.206 ZNS Specific Controller Data 00:09:06.206 ============================ 00:09:06.206 Zone Append Size Limit: 0 00:09:06.206 00:09:06.206 00:09:06.206 Active Namespaces 00:09:06.206 ================= 00:09:06.206 Namespace ID:1 00:09:06.206 Error Recovery Timeout: Unlimited 00:09:06.206 Command Set Identifier: NVM (00h) 00:09:06.206 Deallocate: Supported 00:09:06.206 Deallocated/Unwritten Error: Supported 00:09:06.206 Deallocated Read Value: All 0x00 00:09:06.206 Deallocate in Write Zeroes: Not Supported 00:09:06.206 Deallocated Guard Field: 0xFFFF 00:09:06.206 Flush: Supported 00:09:06.206 Reservation: Not Supported 00:09:06.206 Namespace Sharing Capabilities: Private 00:09:06.206 Size (in LBAs): 1310720 (5GiB) 00:09:06.206 Capacity (in LBAs): 1310720 (5GiB) 00:09:06.206 Utilization (in LBAs): 1310720 (5GiB) 00:09:06.206 Thin Provisioning: Not Supported 00:09:06.206 Per-NS Atomic Units: No 00:09:06.206 Maximum Single Source Range Length: 128 00:09:06.206 Maximum Copy Length: 128 00:09:06.206 Maximum Source Range Count: 128 00:09:06.206 NGUID/EUI64 Never Reused: No 00:09:06.206 Namespace Write Protected: No 00:09:06.206 Number of LBA Formats: 8 00:09:06.206 Current LBA Format: LBA Format #04 00:09:06.206 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:06.206 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:09:06.206 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:06.206 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:06.206 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:06.206 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:06.206 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:06.206 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:06.206 00:09:06.206 NVM Specific Namespace Data 00:09:06.206 =========================== 00:09:06.206 Logical Block Storage Tag Mask: 0 00:09:06.206 Protection Information Capabilities: 00:09:06.206 16b Guard Protection Information Storage Tag Support: No 00:09:06.206 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:06.206 Storage Tag Check Read Support: No 00:09:06.206 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.206 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:06.206 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:06.206 ===================================================== 00:09:06.206 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:06.206 ===================================================== 00:09:06.206 Controller Capabilities/Features 00:09:06.206 ================================ 00:09:06.206 Vendor ID: 1b36 00:09:06.206 Subsystem Vendor ID: 1af4 00:09:06.206 Serial Number: 12342 00:09:06.206 Model Number: QEMU NVMe Ctrl 00:09:06.206 Firmware Version: 8.0.0 00:09:06.206 Recommended Arb Burst: 6 00:09:06.206 IEEE OUI Identifier: 00 54 52 00:09:06.206 Multi-path I/O 00:09:06.206 May have multiple subsystem ports: No 00:09:06.206 May have multiple controllers: No 00:09:06.206 Associated with SR-IOV VF: No 00:09:06.206 Max Data Transfer Size: 524288 00:09:06.206 Max Number of Namespaces: 256 00:09:06.206 Max Number of I/O Queues: 64 00:09:06.206 NVMe Specification Version (VS): 1.4 00:09:06.206 NVMe Specification Version (Identify): 1.4 00:09:06.206 Maximum Queue Entries: 2048 00:09:06.206 Contiguous Queues Required: Yes 00:09:06.206 Arbitration Mechanisms Supported 00:09:06.206 Weighted Round Robin: Not Supported 00:09:06.206 Vendor Specific: Not Supported 00:09:06.206 Reset Timeout: 7500 ms 00:09:06.206 Doorbell Stride: 4 bytes 00:09:06.206 NVM Subsystem Reset: Not Supported 00:09:06.206 Command Sets Supported 00:09:06.206 NVM Command Set: Supported 00:09:06.206 Boot Partition: Not Supported 00:09:06.207 Memory Page Size Minimum: 4096 bytes 00:09:06.207 Memory Page Size Maximum: 65536 bytes 00:09:06.207 Persistent Memory Region: Not Supported 00:09:06.207 Optional Asynchronous Events Supported 00:09:06.207 Namespace Attribute Notices: Supported 00:09:06.207 Firmware 
Activation Notices: Not Supported 00:09:06.207 ANA Change Notices: Not Supported 00:09:06.207 PLE Aggregate Log Change Notices: Not Supported 00:09:06.207 LBA Status Info Alert Notices: Not Supported 00:09:06.207 EGE Aggregate Log Change Notices: Not Supported 00:09:06.207 Normal NVM Subsystem Shutdown event: Not Supported 00:09:06.207 Zone Descriptor Change Notices: Not Supported 00:09:06.207 Discovery Log Change Notices: Not Supported 00:09:06.207 Controller Attributes 00:09:06.207 128-bit Host Identifier: Not Supported 00:09:06.207 Non-Operational Permissive Mode: Not Supported 00:09:06.207 NVM Sets: Not Supported 00:09:06.207 Read Recovery Levels: Not Supported 00:09:06.207 Endurance Groups: Not Supported 00:09:06.207 Predictable Latency Mode: Not Supported 00:09:06.207 Traffic Based Keep Alive: Not Supported 00:09:06.207 Namespace Granularity: Not Supported 00:09:06.207 SQ Associations: Not Supported 00:09:06.207 UUID List: Not Supported 00:09:06.207 Multi-Domain Subsystem: Not Supported 00:09:06.207 Fixed Capacity Management: Not Supported 00:09:06.207 Variable Capacity Management: Not Supported 00:09:06.207 Delete Endurance Group: Not Supported 00:09:06.207 Delete NVM Set: Not Supported 00:09:06.207 Extended LBA Formats Supported: Supported 00:09:06.207 Flexible Data Placement Supported: Not Supported 00:09:06.207 00:09:06.207 Controller Memory Buffer Support 00:09:06.207 ================================ 00:09:06.207 Supported: No 00:09:06.207 00:09:06.207 Persistent Memory Region Support 00:09:06.207 ================================ 00:09:06.207 Supported: No 00:09:06.207 00:09:06.207 Admin Command Set Attributes 00:09:06.207 ============================ 00:09:06.207 Security Send/Receive: Not Supported 00:09:06.207 Format NVM: Supported 00:09:06.207 Firmware Activate/Download: Not Supported 00:09:06.207 Namespace Management: Supported 00:09:06.207 Device Self-Test: Not Supported 00:09:06.207 Directives: Supported 00:09:06.207 NVMe-MI: Not Supported 00:09:06.207 Virtualization Management: Not Supported 00:09:06.207 Doorbell Buffer Config: Supported 00:09:06.207 Get LBA Status Capability: Not Supported 00:09:06.207 Command & Feature Lockdown Capability: Not Supported 00:09:06.207 Abort Command Limit: 4 00:09:06.207 Async Event Request Limit: 4 00:09:06.207 Number of Firmware Slots: N/A 00:09:06.207 Firmware Slot 1 Read-Only: N/A 00:09:06.207 Firmware Activation Without Reset: N/A 00:09:06.207 Multiple Update Detection Support: N/A 00:09:06.207 Firmware Update Granularity: No Information Provided 00:09:06.207 Per-Namespace SMART Log: Yes 00:09:06.207 Asymmetric Namespace Access Log Page: Not Supported 00:09:06.207 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:06.207 Command Effects Log Page: Supported 00:09:06.207 Get Log Page Extended Data: Supported 00:09:06.207 Telemetry Log Pages: Not Supported 00:09:06.207 Persistent Event Log Pages: Not Supported 00:09:06.207 Supported Log Pages Log Page: May Support 00:09:06.207 Commands Supported & Effects Log Page: Not Supported 00:09:06.207 Feature Identifiers & Effects Log Page: May Support 00:09:06.207 NVMe-MI Commands & Effects Log Page: May Support 00:09:06.207 Data Area 4 for Telemetry Log: Not Supported 00:09:06.207 Error Log Page Entries Supported: 1 00:09:06.207 Keep Alive: Not Supported 00:09:06.207 00:09:06.207 NVM Command Set Attributes 00:09:06.207 ========================== 00:09:06.207 Submission Queue Entry Size 00:09:06.207 Max: 64 00:09:06.207 Min: 64 00:09:06.207 Completion Queue Entry Size 00:09:06.207 Max: 16
00:09:06.207 Min: 16 00:09:06.207 Number of Namespaces: 256 00:09:06.207 Compare Command: Supported 00:09:06.207 Write Uncorrectable Command: Not Supported 00:09:06.207 Dataset Management Command: Supported 00:09:06.207 Write Zeroes Command: Supported 00:09:06.207 Set Features Save Field: Supported 00:09:06.207 Reservations: Not Supported 00:09:06.207 Timestamp: Supported 00:09:06.207 Copy: Supported 00:09:06.207 Volatile Write Cache: Present 00:09:06.207 Atomic Write Unit (Normal): 1 00:09:06.207 Atomic Write Unit (PFail): 1 00:09:06.207 Atomic Compare & Write Unit: 1 00:09:06.207 Fused Compare & Write: Not Supported 00:09:06.207 Scatter-Gather List 00:09:06.207 SGL Command Set: Supported 00:09:06.207 SGL Keyed: Not Supported 00:09:06.207 SGL Bit Bucket Descriptor: Not Supported 00:09:06.207 SGL Metadata Pointer: Not Supported 00:09:06.207 Oversized SGL: Not Supported 00:09:06.207 SGL Metadata Address: Not Supported 00:09:06.207 SGL Offset: Not Supported 00:09:06.207 Transport SGL Data Block: Not Supported 00:09:06.207 Replay Protected Memory Block: Not Supported 00:09:06.207 00:09:06.207 Firmware Slot Information 00:09:06.207 ========================= 00:09:06.207 Active slot: 1 00:09:06.207 Slot 1 Firmware Revision: 1.0 00:09:06.207 00:09:06.207 00:09:06.207 Commands Supported and Effects 00:09:06.207 ============================== 00:09:06.207 Admin Commands 00:09:06.207 -------------- 00:09:06.207 Delete I/O Submission Queue (00h): Supported 00:09:06.207 Create I/O Submission Queue (01h): Supported 00:09:06.207 Get Log Page (02h): Supported 00:09:06.207 Delete I/O Completion Queue (04h): Supported 00:09:06.207 Create I/O Completion Queue (05h): Supported 00:09:06.207 Identify (06h): Supported 00:09:06.207 Abort (08h): Supported 00:09:06.207 Set Features (09h): Supported 00:09:06.207 Get Features (0Ah): Supported 00:09:06.207 Asynchronous Event Request (0Ch): Supported 00:09:06.207 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:06.207 Directive Send (19h): Supported 00:09:06.207 Directive Receive (1Ah): Supported 00:09:06.207 Virtualization Management (1Ch): Supported 00:09:06.207 Doorbell Buffer Config (7Ch): Supported 00:09:06.207 Format NVM (80h): Supported LBA-Change 00:09:06.207 I/O Commands 00:09:06.207 ------------ 00:09:06.207 Flush (00h): Supported LBA-Change 00:09:06.207 Write (01h): Supported LBA-Change 00:09:06.207 Read (02h): Supported 00:09:06.207 Compare (05h): Supported 00:09:06.207 Write Zeroes (08h): Supported LBA-Change 00:09:06.207 Dataset Management (09h): Supported LBA-Change 00:09:06.207 Unknown (0Ch): Supported 00:09:06.207 Unknown (12h): Supported 00:09:06.207 Copy (19h): Supported LBA-Change 00:09:06.207 Unknown (1Dh): Supported LBA-Change 00:09:06.207 00:09:06.207 Error Log 00:09:06.207 ========= 00:09:06.207 00:09:06.207 Arbitration 00:09:06.207 =========== 00:09:06.207 Arbitration Burst: no limit 00:09:06.207 00:09:06.207 Power Management 00:09:06.207 ================ 00:09:06.207 Number of Power States: 1 00:09:06.207 Current Power State: Power State #0 00:09:06.207 Power State #0: 00:09:06.207 Max Power: 25.00 W 00:09:06.207 Non-Operational State: Operational 00:09:06.207 Entry Latency: 16 microseconds 00:09:06.207 Exit Latency: 4 microseconds 00:09:06.207 Relative Read Throughput: 0 00:09:06.207 Relative Read Latency: 0 00:09:06.207 Relative Write Throughput: 0 00:09:06.207 Relative Write Latency: 0 00:09:06.207 Idle Power: Not Reported 00:09:06.207 Active Power: Not Reported 00:09:06.207 Non-Operational Permissive Mode: Not Supported 
00:09:06.207 00:09:06.207 Health Information 00:09:06.207 ================== 00:09:06.207 Critical Warnings: 00:09:06.207 Available Spare Space: OK 00:09:06.207 Temperature: OK 00:09:06.208 Device Reliability: OK 00:09:06.208 Read Only: No 00:09:06.208 Volatile Memory Backup: OK 00:09:06.208 Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.208 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:06.208 Available Spare: 0% 00:09:06.208 Available Spare Threshold: 0% 00:09:06.208 Life Percentage Used: 0% 00:09:06.208 Data Units Read: 2161 00:09:06.208 Data Units Written: 1948 00:09:06.208 Host Read Commands: 107123 00:09:06.208 Host Write Commands: 105393 00:09:06.208 Controller Busy Time: 0 minutes 00:09:06.208 Power Cycles: 0 00:09:06.208 Power On Hours: 0 hours 00:09:06.208 Unsafe Shutdowns: 0 00:09:06.208 Unrecoverable Media Errors: 0 00:09:06.208 Lifetime Error Log Entries: 0 00:09:06.208 Warning Temperature Time: 0 minutes 00:09:06.208 Critical Temperature Time: 0 minutes 00:09:06.208 00:09:06.208 Number of Queues 00:09:06.208 ================ 00:09:06.208 Number of I/O Submission Queues: 64 00:09:06.208 Number of I/O Completion Queues: 64 00:09:06.208 00:09:06.208 ZNS Specific Controller Data 00:09:06.208 ============================ 00:09:06.208 Zone Append Size Limit: 0 00:09:06.208 00:09:06.208 00:09:06.208 Active Namespaces 00:09:06.208 ================= 00:09:06.208 Namespace ID:1 00:09:06.208 Error Recovery Timeout: Unlimited 00:09:06.208 Command Set Identifier: NVM (00h) 00:09:06.208 Deallocate: Supported 00:09:06.208 Deallocated/Unwritten Error: Supported 00:09:06.208 Deallocated Read Value: All 0x00 00:09:06.208 Deallocate in Write Zeroes: Not Supported 00:09:06.208 Deallocated Guard Field: 0xFFFF 00:09:06.208 Flush: Supported 00:09:06.208 Reservation: Not Supported 00:09:06.208 Namespace Sharing Capabilities: Private 00:09:06.208 Size (in LBAs): 1048576 (4GiB) 00:09:06.208 Capacity (in LBAs): 1048576 (4GiB) 00:09:06.208 Utilization (in LBAs): 1048576 (4GiB) 00:09:06.208 Thin Provisioning: Not Supported 00:09:06.208 Per-NS Atomic Units: No 00:09:06.208 Maximum Single Source Range Length: 128 00:09:06.208 Maximum Copy Length: 128 00:09:06.208 Maximum Source Range Count: 128 00:09:06.208 NGUID/EUI64 Never Reused: No 00:09:06.208 Namespace Write Protected: No 00:09:06.208 Number of LBA Formats: 8 00:09:06.208 Current LBA Format: LBA Format #04 00:09:06.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:06.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:06.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:06.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:06.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:06.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:06.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:06.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:06.208 00:09:06.208 NVM Specific Namespace Data 00:09:06.208 =========================== 00:09:06.208 Logical Block Storage Tag Mask: 0 00:09:06.208 Protection Information Capabilities: 00:09:06.208 16b Guard Protection Information Storage Tag Support: No 00:09:06.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:06.208 Storage Tag Check Read Support: No 00:09:06.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Namespace ID:2 00:09:06.208 Error Recovery Timeout: Unlimited 00:09:06.208 Command Set Identifier: NVM (00h) 00:09:06.208 Deallocate: Supported 00:09:06.208 Deallocated/Unwritten Error: Supported 00:09:06.208 Deallocated Read Value: All 0x00 00:09:06.208 Deallocate in Write Zeroes: Not Supported 00:09:06.208 Deallocated Guard Field: 0xFFFF 00:09:06.208 Flush: Supported 00:09:06.208 Reservation: Not Supported 00:09:06.208 Namespace Sharing Capabilities: Private 00:09:06.208 Size (in LBAs): 1048576 (4GiB) 00:09:06.208 Capacity (in LBAs): 1048576 (4GiB) 00:09:06.208 Utilization (in LBAs): 1048576 (4GiB) 00:09:06.208 Thin Provisioning: Not Supported 00:09:06.208 Per-NS Atomic Units: No 00:09:06.208 Maximum Single Source Range Length: 128 00:09:06.208 Maximum Copy Length: 128 00:09:06.208 Maximum Source Range Count: 128 00:09:06.208 NGUID/EUI64 Never Reused: No 00:09:06.208 Namespace Write Protected: No 00:09:06.208 Number of LBA Formats: 8 00:09:06.208 Current LBA Format: LBA Format #04 00:09:06.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:06.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:06.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:06.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:06.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:06.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:06.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:06.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:06.208 00:09:06.208 NVM Specific Namespace Data 00:09:06.208 =========================== 00:09:06.208 Logical Block Storage Tag Mask: 0 00:09:06.208 Protection Information Capabilities: 00:09:06.208 16b Guard Protection Information Storage Tag Support: No 00:09:06.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:06.208 Storage Tag Check Read Support: No 00:09:06.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Namespace ID:3 00:09:06.208 Error Recovery Timeout: Unlimited 00:09:06.208 Command Set Identifier: NVM (00h) 00:09:06.208 Deallocate: Supported 00:09:06.208 Deallocated/Unwritten Error: Supported 00:09:06.208 Deallocated Read 
Value: All 0x00 00:09:06.208 Deallocate in Write Zeroes: Not Supported 00:09:06.208 Deallocated Guard Field: 0xFFFF 00:09:06.208 Flush: Supported 00:09:06.208 Reservation: Not Supported 00:09:06.208 Namespace Sharing Capabilities: Private 00:09:06.208 Size (in LBAs): 1048576 (4GiB) 00:09:06.208 Capacity (in LBAs): 1048576 (4GiB) 00:09:06.208 Utilization (in LBAs): 1048576 (4GiB) 00:09:06.208 Thin Provisioning: Not Supported 00:09:06.208 Per-NS Atomic Units: No 00:09:06.208 Maximum Single Source Range Length: 128 00:09:06.208 Maximum Copy Length: 128 00:09:06.208 Maximum Source Range Count: 128 00:09:06.208 NGUID/EUI64 Never Reused: No 00:09:06.208 Namespace Write Protected: No 00:09:06.208 Number of LBA Formats: 8 00:09:06.208 Current LBA Format: LBA Format #04 00:09:06.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:06.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:06.208 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:06.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:06.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:06.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:06.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:06.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:06.208 00:09:06.208 NVM Specific Namespace Data 00:09:06.208 =========================== 00:09:06.208 Logical Block Storage Tag Mask: 0 00:09:06.208 Protection Information Capabilities: 00:09:06.208 16b Guard Protection Information Storage Tag Support: No 00:09:06.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:06.208 Storage Tag Check Read Support: No 00:09:06.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.208 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:06.208 04:55:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:06.471 ===================================================== 00:09:06.471 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:06.471 ===================================================== 00:09:06.471 Controller Capabilities/Features 00:09:06.471 ================================ 00:09:06.471 Vendor ID: 1b36 00:09:06.471 Subsystem Vendor ID: 1af4 00:09:06.471 Serial Number: 12343 00:09:06.471 Model Number: QEMU NVMe Ctrl 00:09:06.471 Firmware Version: 8.0.0 00:09:06.471 Recommended Arb Burst: 6 00:09:06.471 IEEE OUI Identifier: 00 54 52 00:09:06.471 Multi-path I/O 00:09:06.471 May have multiple subsystem ports: No 00:09:06.471 May have multiple controllers: Yes 00:09:06.471 Associated with SR-IOV VF: No 00:09:06.471 Max Data Transfer Size: 524288 00:09:06.471 Max Number of Namespaces: 
256 00:09:06.471 Max Number of I/O Queues: 64 00:09:06.471 NVMe Specification Version (VS): 1.4 00:09:06.471 NVMe Specification Version (Identify): 1.4 00:09:06.471 Maximum Queue Entries: 2048 00:09:06.471 Contiguous Queues Required: Yes 00:09:06.471 Arbitration Mechanisms Supported 00:09:06.471 Weighted Round Robin: Not Supported 00:09:06.471 Vendor Specific: Not Supported 00:09:06.471 Reset Timeout: 7500 ms 00:09:06.471 Doorbell Stride: 4 bytes 00:09:06.471 NVM Subsystem Reset: Not Supported 00:09:06.471 Command Sets Supported 00:09:06.471 NVM Command Set: Supported 00:09:06.471 Boot Partition: Not Supported 00:09:06.471 Memory Page Size Minimum: 4096 bytes 00:09:06.471 Memory Page Size Maximum: 65536 bytes 00:09:06.471 Persistent Memory Region: Not Supported 00:09:06.471 Optional Asynchronous Events Supported 00:09:06.471 Namespace Attribute Notices: Supported 00:09:06.471 Firmware Activation Notices: Not Supported 00:09:06.471 ANA Change Notices: Not Supported 00:09:06.471 PLE Aggregate Log Change Notices: Not Supported 00:09:06.471 LBA Status Info Alert Notices: Not Supported 00:09:06.471 EGE Aggregate Log Change Notices: Not Supported 00:09:06.471 Normal NVM Subsystem Shutdown event: Not Supported 00:09:06.471 Zone Descriptor Change Notices: Not Supported 00:09:06.471 Discovery Log Change Notices: Not Supported 00:09:06.471 Controller Attributes 00:09:06.471 128-bit Host Identifier: Not Supported 00:09:06.471 Non-Operational Permissive Mode: Not Supported 00:09:06.471 NVM Sets: Not Supported 00:09:06.471 Read Recovery Levels: Not Supported 00:09:06.471 Endurance Groups: Supported 00:09:06.471 Predictable Latency Mode: Not Supported 00:09:06.471 Traffic Based Keep Alive: Not Supported 00:09:06.471 Namespace Granularity: Not Supported 00:09:06.471 SQ Associations: Not Supported 00:09:06.471 UUID List: Not Supported 00:09:06.471 Multi-Domain Subsystem: Not Supported 00:09:06.471 Fixed Capacity Management: Not Supported 00:09:06.471 Variable Capacity Management: Not Supported 00:09:06.471 Delete Endurance Group: Not Supported 00:09:06.471 Delete NVM Set: Not Supported 00:09:06.471 Extended LBA Formats Supported: Supported 00:09:06.471 Flexible Data Placement Supported: Supported 00:09:06.471 00:09:06.471 Controller Memory Buffer Support 00:09:06.471 ================================ 00:09:06.471 Supported: No 00:09:06.471 00:09:06.471 Persistent Memory Region Support 00:09:06.471 ================================ 00:09:06.471 Supported: No 00:09:06.471 00:09:06.471 Admin Command Set Attributes 00:09:06.471 ============================ 00:09:06.471 Security Send/Receive: Not Supported 00:09:06.471 Format NVM: Supported 00:09:06.471 Firmware Activate/Download: Not Supported 00:09:06.471 Namespace Management: Supported 00:09:06.471 Device Self-Test: Not Supported 00:09:06.471 Directives: Supported 00:09:06.471 NVMe-MI: Not Supported 00:09:06.471 Virtualization Management: Not Supported 00:09:06.471 Doorbell Buffer Config: Supported 00:09:06.471 Get LBA Status Capability: Not Supported 00:09:06.471 Command & Feature Lockdown Capability: Not Supported 00:09:06.471 Abort Command Limit: 4 00:09:06.471 Async Event Request Limit: 4 00:09:06.471 Number of Firmware Slots: N/A 00:09:06.471 Firmware Slot 1 Read-Only: N/A 00:09:06.471 Firmware Activation Without Reset: N/A 00:09:06.471 Multiple Update Detection Support: N/A 00:09:06.471 Firmware Update Granularity: No Information Provided 00:09:06.471 Per-Namespace SMART Log: Yes 00:09:06.471 Asymmetric Namespace Access Log Page: Not Supported
00:09:06.471 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:06.471 Command Effects Log Page: Supported 00:09:06.471 Get Log Page Extended Data: Supported 00:09:06.471 Telemetry Log Pages: Not Supported 00:09:06.471 Persistent Event Log Pages: Not Supported 00:09:06.471 Supported Log Pages Log Page: May Support 00:09:06.471 Commands Supported & Effects Log Page: Not Supported 00:09:06.471 Feature Identifiers & Effects Log Page: May Support 00:09:06.471 NVMe-MI Commands & Effects Log Page: May Support 00:09:06.471 Data Area 4 for Telemetry Log: Not Supported 00:09:06.471 Error Log Page Entries Supported: 1 00:09:06.471 Keep Alive: Not Supported 00:09:06.471 00:09:06.471 NVM Command Set Attributes 00:09:06.471 ========================== 00:09:06.471 Submission Queue Entry Size 00:09:06.471 Max: 64 00:09:06.471 Min: 64 00:09:06.471 Completion Queue Entry Size 00:09:06.471 Max: 16 00:09:06.471 Min: 16 00:09:06.471 Number of Namespaces: 256 00:09:06.471 Compare Command: Supported 00:09:06.471 Write Uncorrectable Command: Not Supported 00:09:06.471 Dataset Management Command: Supported 00:09:06.471 Write Zeroes Command: Supported 00:09:06.471 Set Features Save Field: Supported 00:09:06.472 Reservations: Not Supported 00:09:06.472 Timestamp: Supported 00:09:06.472 Copy: Supported 00:09:06.472 Volatile Write Cache: Present 00:09:06.472 Atomic Write Unit (Normal): 1 00:09:06.472 Atomic Write Unit (PFail): 1 00:09:06.472 Atomic Compare & Write Unit: 1 00:09:06.472 Fused Compare & Write: Not Supported 00:09:06.472 Scatter-Gather List 00:09:06.472 SGL Command Set: Supported 00:09:06.472 SGL Keyed: Not Supported 00:09:06.472 SGL Bit Bucket Descriptor: Not Supported 00:09:06.472 SGL Metadata Pointer: Not Supported 00:09:06.472 Oversized SGL: Not Supported 00:09:06.472 SGL Metadata Address: Not Supported 00:09:06.472 SGL Offset: Not Supported 00:09:06.472 Transport SGL Data Block: Not Supported 00:09:06.472 Replay Protected Memory Block: Not Supported 00:09:06.472 00:09:06.472 Firmware Slot Information 00:09:06.472 ========================= 00:09:06.472 Active slot: 1 00:09:06.472 Slot 1 Firmware Revision: 1.0 00:09:06.472 00:09:06.472 00:09:06.472 Commands Supported and Effects 00:09:06.472 ============================== 00:09:06.472 Admin Commands 00:09:06.472 -------------- 00:09:06.472 Delete I/O Submission Queue (00h): Supported 00:09:06.472 Create I/O Submission Queue (01h): Supported 00:09:06.472 Get Log Page (02h): Supported 00:09:06.472 Delete I/O Completion Queue (04h): Supported 00:09:06.472 Create I/O Completion Queue (05h): Supported 00:09:06.472 Identify (06h): Supported 00:09:06.472 Abort (08h): Supported 00:09:06.472 Set Features (09h): Supported 00:09:06.472 Get Features (0Ah): Supported 00:09:06.472 Asynchronous Event Request (0Ch): Supported 00:09:06.472 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:06.472 Directive Send (19h): Supported 00:09:06.472 Directive Receive (1Ah): Supported 00:09:06.472 Virtualization Management (1Ch): Supported 00:09:06.472 Doorbell Buffer Config (7Ch): Supported 00:09:06.472 Format NVM (80h): Supported LBA-Change 00:09:06.472 I/O Commands 00:09:06.472 ------------ 00:09:06.472 Flush (00h): Supported LBA-Change 00:09:06.472 Write (01h): Supported LBA-Change 00:09:06.472 Read (02h): Supported 00:09:06.472 Compare (05h): Supported 00:09:06.472 Write Zeroes (08h): Supported LBA-Change 00:09:06.472 Dataset Management (09h): Supported LBA-Change 00:09:06.472 Unknown (0Ch): Supported 00:09:06.472 Unknown (12h): Supported 00:09:06.472 Copy
(19h): Supported LBA-Change 00:09:06.472 Unknown (1Dh): Supported LBA-Change 00:09:06.472 00:09:06.472 Error Log 00:09:06.472 ========= 00:09:06.472 00:09:06.472 Arbitration 00:09:06.472 =========== 00:09:06.472 Arbitration Burst: no limit 00:09:06.472 00:09:06.472 Power Management 00:09:06.472 ================ 00:09:06.472 Number of Power States: 1 00:09:06.472 Current Power State: Power State #0 00:09:06.472 Power State #0: 00:09:06.472 Max Power: 25.00 W 00:09:06.472 Non-Operational State: Operational 00:09:06.472 Entry Latency: 16 microseconds 00:09:06.472 Exit Latency: 4 microseconds 00:09:06.472 Relative Read Throughput: 0 00:09:06.472 Relative Read Latency: 0 00:09:06.472 Relative Write Throughput: 0 00:09:06.472 Relative Write Latency: 0 00:09:06.472 Idle Power: Not Reported 00:09:06.472 Active Power: Not Reported 00:09:06.472 Non-Operational Permissive Mode: Not Supported 00:09:06.472 00:09:06.472 Health Information 00:09:06.472 ================== 00:09:06.472 Critical Warnings: 00:09:06.472 Available Spare Space: OK 00:09:06.472 Temperature: OK 00:09:06.472 Device Reliability: OK 00:09:06.472 Read Only: No 00:09:06.472 Volatile Memory Backup: OK 00:09:06.472 Current Temperature: 323 Kelvin (50 Celsius) 00:09:06.472 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:06.472 Available Spare: 0% 00:09:06.472 Available Spare Threshold: 0% 00:09:06.472 Life Percentage Used: 0% 00:09:06.472 Data Units Read: 797 00:09:06.472 Data Units Written: 726 00:09:06.472 Host Read Commands: 36421 00:09:06.472 Host Write Commands: 35844 00:09:06.472 Controller Busy Time: 0 minutes 00:09:06.472 Power Cycles: 0 00:09:06.472 Power On Hours: 0 hours 00:09:06.472 Unsafe Shutdowns: 0 00:09:06.472 Unrecoverable Media Errors: 0 00:09:06.472 Lifetime Error Log Entries: 0 00:09:06.472 Warning Temperature Time: 0 minutes 00:09:06.472 Critical Temperature Time: 0 minutes 00:09:06.472 00:09:06.472 Number of Queues 00:09:06.472 ================ 00:09:06.472 Number of I/O Submission Queues: 64 00:09:06.472 Number of I/O Completion Queues: 64 00:09:06.472 00:09:06.472 ZNS Specific Controller Data 00:09:06.472 ============================ 00:09:06.472 Zone Append Size Limit: 0 00:09:06.472 00:09:06.472 00:09:06.472 Active Namespaces 00:09:06.472 ================= 00:09:06.472 Namespace ID:1 00:09:06.472 Error Recovery Timeout: Unlimited 00:09:06.472 Command Set Identifier: NVM (00h) 00:09:06.472 Deallocate: Supported 00:09:06.472 Deallocated/Unwritten Error: Supported 00:09:06.472 Deallocated Read Value: All 0x00 00:09:06.472 Deallocate in Write Zeroes: Not Supported 00:09:06.472 Deallocated Guard Field: 0xFFFF 00:09:06.472 Flush: Supported 00:09:06.472 Reservation: Not Supported 00:09:06.472 Namespace Sharing Capabilities: Multiple Controllers 00:09:06.472 Size (in LBAs): 262144 (1GiB) 00:09:06.472 Capacity (in LBAs): 262144 (1GiB) 00:09:06.472 Utilization (in LBAs): 262144 (1GiB) 00:09:06.472 Thin Provisioning: Not Supported 00:09:06.472 Per-NS Atomic Units: No 00:09:06.472 Maximum Single Source Range Length: 128 00:09:06.472 Maximum Copy Length: 128 00:09:06.472 Maximum Source Range Count: 128 00:09:06.472 NGUID/EUI64 Never Reused: No 00:09:06.472 Namespace Write Protected: No 00:09:06.472 Endurance group ID: 1 00:09:06.472 Number of LBA Formats: 8 00:09:06.472 Current LBA Format: LBA Format #04 00:09:06.472 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:06.472 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:06.472 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:06.472 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:06.472 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:06.472 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:06.472 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:06.472 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:06.472 00:09:06.472 Get Feature FDP: 00:09:06.472 ================ 00:09:06.472 Enabled: Yes 00:09:06.472 FDP configuration index: 0 00:09:06.472 00:09:06.472 FDP configurations log page 00:09:06.472 =========================== 00:09:06.472 Number of FDP configurations: 1 00:09:06.472 Version: 0 00:09:06.472 Size: 112 00:09:06.472 FDP Configuration Descriptor: 0 00:09:06.472 Descriptor Size: 96 00:09:06.472 Reclaim Group Identifier format: 2 00:09:06.472 FDP Volatile Write Cache: Not Present 00:09:06.472 FDP Configuration: Valid 00:09:06.472 Vendor Specific Size: 0 00:09:06.472 Number of Reclaim Groups: 2 00:09:06.472 Number of Reclaim Unit Handles: 8 00:09:06.472 Max Placement Identifiers: 128 00:09:06.472 Number of Namespaces Supported: 256 00:09:06.472 Reclaim Unit Nominal Size: 6000000 bytes 00:09:06.472 Estimated Reclaim Unit Time Limit: Not Reported 00:09:06.472 RUH Desc #000: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #001: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #002: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #003: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #004: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #005: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #006: RUH Type: Initially Isolated 00:09:06.472 RUH Desc #007: RUH Type: Initially Isolated 00:09:06.472 00:09:06.472 FDP reclaim unit handle usage log page 00:09:06.472 ====================================== 00:09:06.472 Number of Reclaim Unit Handles: 8 00:09:06.472 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:06.472 RUH Usage Desc #001: RUH Attributes: Unused 00:09:06.472 RUH Usage Desc #002: RUH Attributes: Unused 00:09:06.472 RUH Usage Desc #003: RUH Attributes: Unused 00:09:06.472 RUH Usage Desc #004: RUH Attributes: Unused 00:09:06.472 RUH Usage Desc #005: RUH Attributes: Unused 00:09:06.472 RUH Usage Desc #006: RUH Attributes: Unused 00:09:06.472 RUH Usage Desc #007: RUH Attributes: Unused 00:09:06.472 00:09:06.472 FDP statistics log page 00:09:06.472 ======================= 00:09:06.472 Host bytes with metadata written: 457023488 00:09:06.472 Media bytes with metadata written: 457068544 00:09:06.472 Media bytes erased: 0 00:09:06.472 00:09:06.472 FDP events log page 00:09:06.472 =================== 00:09:06.472 Number of FDP events: 0 00:09:06.473 00:09:06.473 NVM Specific Namespace Data 00:09:06.473 =========================== 00:09:06.473 Logical Block Storage Tag Mask: 0 00:09:06.473 Protection Information Capabilities: 00:09:06.473 16b Guard Protection Information Storage Tag Support: No 00:09:06.473 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:06.473 Storage Tag Check Read Support: No 00:09:06.473 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:06.473 00:09:06.473 real 0m1.143s 00:09:06.473 user 0m0.402s 00:09:06.473 sys 0m0.533s 00:09:06.473 ************************************ 00:09:06.473 END TEST nvme_identify 00:09:06.473 ************************************ 00:09:06.473 04:55:23 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.473 04:55:23 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:06.473 04:55:23 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:06.473 04:55:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.473 04:55:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.473 04:55:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.473 ************************************ 00:09:06.473 START TEST nvme_perf 00:09:06.473 ************************************ 00:09:06.473 04:55:23 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:06.473 04:55:23 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:07.858 Initializing NVMe Controllers 00:09:07.858 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:07.858 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:07.858 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:07.858 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:07.858 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:07.858 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:07.858 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:07.858 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:07.858 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:07.858 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:07.858 Initialization complete. Launching workers. 
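The results that follow come from the spdk_nvme_perf invocation recorded above: 128 outstanding 12288-byte reads (three blocks of the current 4096-byte LBA format #04) against each attached namespace for one second, with the doubled latency flag -LL producing both the percentile summaries and the "Range in us / Cumulative IO count" histograms. Below is a minimal sketch of replaying the two test steps by hand, assuming an SPDK build at the path this log records; the bdfs array mirrors the "for bdf in ${bdfs[@]}" loop traced earlier (listed in the attach order shown above), and the flag glosses in the comments are our reading of the tools' help text, not something this log states.

#!/usr/bin/env bash
# Sketch only: replay the identify and perf steps from this log by hand.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin             # build path recorded above
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:13.0 0000:00:12.0)  # controllers attached above

# Per-BDF identify dump, as TEST nvme_identify does for each controller.
for bdf in "${bdfs[@]}"; do
    "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done

# -q 128: queue depth; -w read: sequential reads; -o 12288: I/O size in bytes;
# -t 1: run for one second; -LL: latency summary plus per-bucket histograms;
# -i 0: shared memory group 0; -N: no shutdown notification on detach.
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Without -LL the tool would report only the IOPS/throughput table; everything from the per-device percentile summaries down through the cumulative latency histograms is latency-tracking output.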
00:09:07.858 ======================================================== 00:09:07.858 Latency(us) 00:09:07.858 Device Information : IOPS MiB/s Average min max 00:09:07.858 PCIE (0000:00:10.0) NSID 1 from core 0: 14246.09 166.95 8986.68 4516.11 29281.15 00:09:07.858 PCIE (0000:00:11.0) NSID 1 from core 0: 14246.09 166.95 8978.11 4364.67 29012.80 00:09:07.858 PCIE (0000:00:13.0) NSID 1 from core 0: 14246.09 166.95 8969.33 3887.81 29069.26 00:09:07.858 PCIE (0000:00:12.0) NSID 1 from core 0: 14246.09 166.95 8960.32 3696.46 28607.96 00:09:07.858 PCIE (0000:00:12.0) NSID 2 from core 0: 14246.09 166.95 8951.32 3477.86 28582.22 00:09:07.858 PCIE (0000:00:12.0) NSID 3 from core 0: 14246.09 166.95 8942.34 3252.52 27835.72 00:09:07.858 ======================================================== 00:09:07.858 Total : 85476.52 1001.68 8964.68 3252.52 29281.15 00:09:07.858 00:09:07.858 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:07.858 ================================================================================= 00:09:07.858 1.00000% : 5948.652us 00:09:07.858 10.00000% : 6150.302us 00:09:07.858 25.00000% : 6402.363us 00:09:07.858 50.00000% : 6805.662us 00:09:07.858 75.00000% : 12502.252us 00:09:07.858 90.00000% : 15022.868us 00:09:07.858 95.00000% : 16333.588us 00:09:07.858 98.00000% : 18854.203us 00:09:07.858 99.00000% : 19559.975us 00:09:07.858 99.50000% : 21173.169us 00:09:07.858 99.90000% : 29037.489us 00:09:07.858 99.99000% : 29239.138us 00:09:07.858 99.99900% : 29440.788us 00:09:07.858 99.99990% : 29440.788us 00:09:07.858 99.99999% : 29440.788us 00:09:07.858 00:09:07.858 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:07.858 ================================================================================= 00:09:07.858 1.00000% : 6049.477us 00:09:07.858 10.00000% : 6200.714us 00:09:07.858 25.00000% : 6427.569us 00:09:07.858 50.00000% : 6805.662us 00:09:07.858 75.00000% : 12552.665us 00:09:07.858 90.00000% : 15022.868us 00:09:07.858 95.00000% : 16131.938us 00:09:07.858 98.00000% : 18854.203us 00:09:07.858 99.00000% : 19761.625us 00:09:07.858 99.50000% : 21072.345us 00:09:07.858 99.90000% : 28835.840us 00:09:07.858 99.99000% : 29037.489us 00:09:07.858 99.99900% : 29037.489us 00:09:07.858 99.99990% : 29037.489us 00:09:07.858 99.99999% : 29037.489us 00:09:07.858 00:09:07.858 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:07.858 ================================================================================= 00:09:07.858 1.00000% : 6024.271us 00:09:07.858 10.00000% : 6200.714us 00:09:07.858 25.00000% : 6402.363us 00:09:07.858 50.00000% : 6805.662us 00:09:07.858 75.00000% : 12502.252us 00:09:07.858 90.00000% : 14922.043us 00:09:07.858 95.00000% : 16535.237us 00:09:07.858 98.00000% : 18955.028us 00:09:07.858 99.00000% : 20366.572us 00:09:07.858 99.50000% : 21173.169us 00:09:07.858 99.90000% : 28835.840us 00:09:07.858 99.99000% : 29239.138us 00:09:07.858 99.99900% : 29239.138us 00:09:07.858 99.99990% : 29239.138us 00:09:07.858 99.99999% : 29239.138us 00:09:07.858 00:09:07.858 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:07.858 ================================================================================= 00:09:07.858 1.00000% : 5999.065us 00:09:07.858 10.00000% : 6200.714us 00:09:07.858 25.00000% : 6427.569us 00:09:07.858 50.00000% : 6805.662us 00:09:07.858 75.00000% : 12552.665us 00:09:07.858 90.00000% : 14922.043us 00:09:07.858 95.00000% : 16232.763us 00:09:07.858 98.00000% : 19257.502us 00:09:07.858 
99.00000% : 20669.046us 00:09:07.858 99.50000% : 21374.818us 00:09:07.858 99.90000% : 28432.542us 00:09:07.858 99.99000% : 28634.191us 00:09:07.858 99.99900% : 28634.191us 00:09:07.858 99.99990% : 28634.191us 00:09:07.858 99.99999% : 28634.191us 00:09:07.858 00:09:07.858 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:07.858 ================================================================================= 00:09:07.858 1.00000% : 5999.065us 00:09:07.858 10.00000% : 6200.714us 00:09:07.858 25.00000% : 6402.363us 00:09:07.858 50.00000% : 6805.662us 00:09:07.858 75.00000% : 12502.252us 00:09:07.858 90.00000% : 14922.043us 00:09:07.858 95.00000% : 16333.588us 00:09:07.858 98.00000% : 18854.203us 00:09:07.858 99.00000% : 20669.046us 00:09:07.858 99.50000% : 21576.468us 00:09:07.858 99.90000% : 28432.542us 00:09:07.858 99.99000% : 28634.191us 00:09:07.858 99.99900% : 28634.191us 00:09:07.858 99.99990% : 28634.191us 00:09:07.858 99.99999% : 28634.191us 00:09:07.858 00:09:07.858 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:07.858 ================================================================================= 00:09:07.858 1.00000% : 5999.065us 00:09:07.858 10.00000% : 6200.714us 00:09:07.858 25.00000% : 6402.363us 00:09:07.858 50.00000% : 6755.249us 00:09:07.858 75.00000% : 12502.252us 00:09:07.858 90.00000% : 14922.043us 00:09:07.858 95.00000% : 16333.588us 00:09:07.858 98.00000% : 19055.852us 00:09:07.858 99.00000% : 20769.871us 00:09:07.858 99.50000% : 21475.643us 00:09:07.858 99.90000% : 27625.945us 00:09:07.858 99.99000% : 27827.594us 00:09:07.858 99.99900% : 28029.243us 00:09:07.859 99.99990% : 28029.243us 00:09:07.859 99.99999% : 28029.243us 00:09:07.859 00:09:07.859 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:07.859 ============================================================================== 00:09:07.859 Range in us Cumulative IO count 00:09:07.859 4511.902 - 4537.108: 0.0140% ( 2) 00:09:07.859 4537.108 - 4562.314: 0.0350% ( 3) 00:09:07.859 4562.314 - 4587.520: 0.0420% ( 1) 00:09:07.859 4587.520 - 4612.726: 0.0561% ( 2) 00:09:07.859 4612.726 - 4637.932: 0.0631% ( 1) 00:09:07.859 4637.932 - 4663.138: 0.0841% ( 3) 00:09:07.859 4663.138 - 4688.345: 0.0911% ( 1) 00:09:07.859 4688.345 - 4713.551: 0.1051% ( 2) 00:09:07.859 4713.551 - 4738.757: 0.1191% ( 2) 00:09:07.859 4738.757 - 4763.963: 0.1331% ( 2) 00:09:07.859 4763.963 - 4789.169: 0.1471% ( 2) 00:09:07.859 4789.169 - 4814.375: 0.1612% ( 2) 00:09:07.859 4814.375 - 4839.582: 0.1752% ( 2) 00:09:07.859 4839.582 - 4864.788: 0.1892% ( 2) 00:09:07.859 4864.788 - 4889.994: 0.2032% ( 2) 00:09:07.859 4889.994 - 4915.200: 0.2102% ( 1) 00:09:07.859 4915.200 - 4940.406: 0.2242% ( 2) 00:09:07.859 4940.406 - 4965.612: 0.2382% ( 2) 00:09:07.859 4965.612 - 4990.818: 0.2522% ( 2) 00:09:07.859 4990.818 - 5016.025: 0.2663% ( 2) 00:09:07.859 5016.025 - 5041.231: 0.2803% ( 2) 00:09:07.859 5041.231 - 5066.437: 0.2873% ( 1) 00:09:07.859 5066.437 - 5091.643: 0.3153% ( 4) 00:09:07.859 5116.849 - 5142.055: 0.3293% ( 2) 00:09:07.859 5142.055 - 5167.262: 0.3433% ( 2) 00:09:07.859 5167.262 - 5192.468: 0.3573% ( 2) 00:09:07.859 5192.468 - 5217.674: 0.3714% ( 2) 00:09:07.859 5217.674 - 5242.880: 0.3854% ( 2) 00:09:07.859 5242.880 - 5268.086: 0.3924% ( 1) 00:09:07.859 5268.086 - 5293.292: 0.4064% ( 2) 00:09:07.859 5293.292 - 5318.498: 0.4204% ( 2) 00:09:07.859 5318.498 - 5343.705: 0.4344% ( 2) 00:09:07.859 5343.705 - 5368.911: 0.4484% ( 2) 00:09:07.859 5873.034 - 5898.240: 0.5045% ( 8) 00:09:07.859 
5898.240 - 5923.446: 0.7357% ( 33) 00:09:07.859 5923.446 - 5948.652: 1.1071% ( 53) 00:09:07.859 5948.652 - 5973.858: 1.9829% ( 125) 00:09:07.859 5973.858 - 5999.065: 2.9779% ( 142) 00:09:07.859 5999.065 - 6024.271: 4.1480% ( 167) 00:09:07.859 6024.271 - 6049.477: 5.6124% ( 209) 00:09:07.859 6049.477 - 6074.683: 7.0488% ( 205) 00:09:07.859 6074.683 - 6099.889: 8.5902% ( 220) 00:09:07.859 6099.889 - 6125.095: 9.9005% ( 187) 00:09:07.859 6125.095 - 6150.302: 11.3509% ( 207) 00:09:07.859 6150.302 - 6175.508: 12.7452% ( 199) 00:09:07.859 6175.508 - 6200.714: 14.0625% ( 188) 00:09:07.859 6200.714 - 6225.920: 15.5619% ( 214) 00:09:07.859 6225.920 - 6251.126: 17.1174% ( 222) 00:09:07.859 6251.126 - 6276.332: 18.4767% ( 194) 00:09:07.859 6276.332 - 6301.538: 20.0603% ( 226) 00:09:07.859 6301.538 - 6326.745: 21.5807% ( 217) 00:09:07.859 6326.745 - 6351.951: 23.0311% ( 207) 00:09:07.859 6351.951 - 6377.157: 24.5866% ( 222) 00:09:07.859 6377.157 - 6402.363: 26.1001% ( 216) 00:09:07.859 6402.363 - 6427.569: 27.6275% ( 218) 00:09:07.859 6427.569 - 6452.775: 29.1059% ( 211) 00:09:07.859 6452.775 - 6503.188: 32.1118% ( 429) 00:09:07.859 6503.188 - 6553.600: 35.0897% ( 425) 00:09:07.859 6553.600 - 6604.012: 38.1376% ( 435) 00:09:07.859 6604.012 - 6654.425: 41.2206% ( 440) 00:09:07.859 6654.425 - 6704.837: 44.1634% ( 420) 00:09:07.859 6704.837 - 6755.249: 47.2393% ( 439) 00:09:07.859 6755.249 - 6805.662: 50.1682% ( 418) 00:09:07.859 6805.662 - 6856.074: 53.2721% ( 443) 00:09:07.859 6856.074 - 6906.486: 56.2290% ( 422) 00:09:07.859 6906.486 - 6956.898: 59.1788% ( 421) 00:09:07.859 6956.898 - 7007.311: 61.9885% ( 401) 00:09:07.859 7007.311 - 7057.723: 63.8803% ( 270) 00:09:07.859 7057.723 - 7108.135: 64.8893% ( 144) 00:09:07.859 7108.135 - 7158.548: 65.2326% ( 49) 00:09:07.859 7158.548 - 7208.960: 65.4638% ( 33) 00:09:07.859 7208.960 - 7259.372: 65.6740% ( 30) 00:09:07.859 7259.372 - 7309.785: 65.8072% ( 19) 00:09:07.859 7309.785 - 7360.197: 65.9403% ( 19) 00:09:07.859 7360.197 - 7410.609: 66.0174% ( 11) 00:09:07.859 7410.609 - 7461.022: 66.0874% ( 10) 00:09:07.859 7461.022 - 7511.434: 66.1645% ( 11) 00:09:07.859 7511.434 - 7561.846: 66.2486% ( 12) 00:09:07.859 7561.846 - 7612.258: 66.3327% ( 12) 00:09:07.859 7612.258 - 7662.671: 66.4378% ( 15) 00:09:07.859 7662.671 - 7713.083: 66.5359% ( 14) 00:09:07.859 7713.083 - 7763.495: 66.6690% ( 19) 00:09:07.859 7763.495 - 7813.908: 66.7951% ( 18) 00:09:07.859 7813.908 - 7864.320: 66.9212% ( 18) 00:09:07.859 7864.320 - 7914.732: 67.0263% ( 15) 00:09:07.859 7914.732 - 7965.145: 67.1455% ( 17) 00:09:07.859 7965.145 - 8015.557: 67.2856% ( 20) 00:09:07.859 8015.557 - 8065.969: 67.4117% ( 18) 00:09:07.859 8065.969 - 8116.382: 67.5308% ( 17) 00:09:07.859 8116.382 - 8166.794: 67.6710% ( 20) 00:09:07.859 8166.794 - 8217.206: 67.7971% ( 18) 00:09:07.859 8217.206 - 8267.618: 67.9092% ( 16) 00:09:07.859 8267.618 - 8318.031: 68.0143% ( 15) 00:09:07.859 8318.031 - 8368.443: 68.1334% ( 17) 00:09:07.859 8368.443 - 8418.855: 68.2035% ( 10) 00:09:07.859 8418.855 - 8469.268: 68.3016% ( 14) 00:09:07.859 8469.268 - 8519.680: 68.3716% ( 10) 00:09:07.859 8519.680 - 8570.092: 68.4627% ( 13) 00:09:07.859 8570.092 - 8620.505: 68.5818% ( 17) 00:09:07.859 8620.505 - 8670.917: 68.6799% ( 14) 00:09:07.859 8670.917 - 8721.329: 68.8061% ( 18) 00:09:07.859 8721.329 - 8771.742: 68.8621% ( 8) 00:09:07.859 8771.742 - 8822.154: 68.9041% ( 6) 00:09:07.859 8822.154 - 8872.566: 68.9392% ( 5) 00:09:07.859 8872.566 - 8922.978: 68.9812% ( 6) 00:09:07.859 8922.978 - 8973.391: 69.0513% ( 10) 00:09:07.859 
8973.391 - 9023.803: 69.1214% ( 10) 00:09:07.859 9023.803 - 9074.215: 69.1704% ( 7) 00:09:07.859 9074.215 - 9124.628: 69.2475% ( 11) 00:09:07.859 9124.628 - 9175.040: 69.3105% ( 9) 00:09:07.859 9175.040 - 9225.452: 69.3666% ( 8) 00:09:07.859 9225.452 - 9275.865: 69.4156% ( 7) 00:09:07.859 9275.865 - 9326.277: 69.4787% ( 9) 00:09:07.859 9326.277 - 9376.689: 69.5698% ( 13) 00:09:07.859 9376.689 - 9427.102: 69.6609% ( 13) 00:09:07.859 9427.102 - 9477.514: 69.7309% ( 10) 00:09:07.859 9477.514 - 9527.926: 69.8220% ( 13) 00:09:07.859 9527.926 - 9578.338: 69.8851% ( 9) 00:09:07.859 9578.338 - 9628.751: 69.9482% ( 9) 00:09:07.859 9628.751 - 9679.163: 69.9972% ( 7) 00:09:07.859 9679.163 - 9729.575: 70.0392% ( 6) 00:09:07.859 9729.575 - 9779.988: 70.0743% ( 5) 00:09:07.859 9779.988 - 9830.400: 70.1233% ( 7) 00:09:07.859 9830.400 - 9880.812: 70.1654% ( 6) 00:09:07.859 9880.812 - 9931.225: 70.2074% ( 6) 00:09:07.860 9931.225 - 9981.637: 70.2354% ( 4) 00:09:07.860 9981.637 - 10032.049: 70.2775% ( 6) 00:09:07.860 10032.049 - 10082.462: 70.3335% ( 8) 00:09:07.860 10082.462 - 10132.874: 70.3966% ( 9) 00:09:07.860 10132.874 - 10183.286: 70.4526% ( 8) 00:09:07.860 10183.286 - 10233.698: 70.5087% ( 8) 00:09:07.860 10233.698 - 10284.111: 70.5717% ( 9) 00:09:07.860 10284.111 - 10334.523: 70.6348% ( 9) 00:09:07.860 10334.523 - 10384.935: 70.7189% ( 12) 00:09:07.860 10384.935 - 10435.348: 70.7890% ( 10) 00:09:07.860 10435.348 - 10485.760: 70.8380% ( 7) 00:09:07.860 10485.760 - 10536.172: 70.8871% ( 7) 00:09:07.860 10536.172 - 10586.585: 70.9361% ( 7) 00:09:07.860 10586.585 - 10636.997: 70.9711% ( 5) 00:09:07.860 10636.997 - 10687.409: 71.0482% ( 11) 00:09:07.860 10687.409 - 10737.822: 71.1393% ( 13) 00:09:07.860 10737.822 - 10788.234: 71.2094% ( 10) 00:09:07.860 10788.234 - 10838.646: 71.2864% ( 11) 00:09:07.860 10838.646 - 10889.058: 71.3635% ( 11) 00:09:07.860 10889.058 - 10939.471: 71.4336% ( 10) 00:09:07.860 10939.471 - 10989.883: 71.5387% ( 15) 00:09:07.860 10989.883 - 11040.295: 71.6158% ( 11) 00:09:07.860 11040.295 - 11090.708: 71.7279% ( 16) 00:09:07.860 11090.708 - 11141.120: 71.8260% ( 14) 00:09:07.860 11141.120 - 11191.532: 71.9030% ( 11) 00:09:07.860 11191.532 - 11241.945: 71.9661% ( 9) 00:09:07.860 11241.945 - 11292.357: 72.0221% ( 8) 00:09:07.860 11292.357 - 11342.769: 72.0992% ( 11) 00:09:07.860 11342.769 - 11393.182: 72.1623% ( 9) 00:09:07.860 11393.182 - 11443.594: 72.2183% ( 8) 00:09:07.860 11443.594 - 11494.006: 72.2534% ( 5) 00:09:07.860 11494.006 - 11544.418: 72.3515% ( 14) 00:09:07.860 11544.418 - 11594.831: 72.4636% ( 16) 00:09:07.860 11594.831 - 11645.243: 72.4986% ( 5) 00:09:07.860 11645.243 - 11695.655: 72.6247% ( 18) 00:09:07.860 11695.655 - 11746.068: 72.7929% ( 24) 00:09:07.860 11746.068 - 11796.480: 72.8840% ( 13) 00:09:07.860 11796.480 - 11846.892: 73.0311% ( 21) 00:09:07.860 11846.892 - 11897.305: 73.1572% ( 18) 00:09:07.860 11897.305 - 11947.717: 73.3044% ( 21) 00:09:07.860 11947.717 - 11998.129: 73.4445% ( 20) 00:09:07.860 11998.129 - 12048.542: 73.6197% ( 25) 00:09:07.860 12048.542 - 12098.954: 73.7458% ( 18) 00:09:07.860 12098.954 - 12149.366: 73.8439% ( 14) 00:09:07.860 12149.366 - 12199.778: 73.9770% ( 19) 00:09:07.860 12199.778 - 12250.191: 74.0961% ( 17) 00:09:07.860 12250.191 - 12300.603: 74.2152% ( 17) 00:09:07.860 12300.603 - 12351.015: 74.4395% ( 32) 00:09:07.860 12351.015 - 12401.428: 74.6216% ( 26) 00:09:07.860 12401.428 - 12451.840: 74.8248% ( 29) 00:09:07.860 12451.840 - 12502.252: 75.0000% ( 25) 00:09:07.860 12502.252 - 12552.665: 75.2102% ( 30) 00:09:07.860 
12552.665 - 12603.077: 75.4905% ( 40) 00:09:07.860 12603.077 - 12653.489: 75.8828% ( 56) 00:09:07.860 12653.489 - 12703.902: 76.1281% ( 35) 00:09:07.860 12703.902 - 12754.314: 76.4504% ( 46) 00:09:07.860 12754.314 - 12804.726: 76.7797% ( 47) 00:09:07.860 12804.726 - 12855.138: 77.0810% ( 43) 00:09:07.860 12855.138 - 12905.551: 77.3823% ( 43) 00:09:07.860 12905.551 - 13006.375: 78.0760% ( 99) 00:09:07.860 13006.375 - 13107.200: 78.9098% ( 119) 00:09:07.860 13107.200 - 13208.025: 79.5404% ( 90) 00:09:07.860 13208.025 - 13308.849: 80.2971% ( 108) 00:09:07.860 13308.849 - 13409.674: 81.0748% ( 111) 00:09:07.860 13409.674 - 13510.498: 81.9086% ( 119) 00:09:07.860 13510.498 - 13611.323: 82.5252% ( 88) 00:09:07.860 13611.323 - 13712.148: 83.1488% ( 89) 00:09:07.860 13712.148 - 13812.972: 83.8004% ( 93) 00:09:07.860 13812.972 - 13913.797: 84.4030% ( 86) 00:09:07.860 13913.797 - 14014.622: 85.2298% ( 118) 00:09:07.860 14014.622 - 14115.446: 85.8885% ( 94) 00:09:07.860 14115.446 - 14216.271: 86.5471% ( 94) 00:09:07.860 14216.271 - 14317.095: 87.1777% ( 90) 00:09:07.860 14317.095 - 14417.920: 87.6752% ( 71) 00:09:07.860 14417.920 - 14518.745: 88.2007% ( 75) 00:09:07.860 14518.745 - 14619.569: 88.6071% ( 58) 00:09:07.860 14619.569 - 14720.394: 88.9994% ( 56) 00:09:07.860 14720.394 - 14821.218: 89.3498% ( 50) 00:09:07.860 14821.218 - 14922.043: 89.7281% ( 54) 00:09:07.860 14922.043 - 15022.868: 90.1065% ( 54) 00:09:07.860 15022.868 - 15123.692: 90.4638% ( 51) 00:09:07.860 15123.692 - 15224.517: 90.7791% ( 45) 00:09:07.860 15224.517 - 15325.342: 91.0874% ( 44) 00:09:07.860 15325.342 - 15426.166: 91.4518% ( 52) 00:09:07.860 15426.166 - 15526.991: 92.0544% ( 86) 00:09:07.860 15526.991 - 15627.815: 92.4678% ( 59) 00:09:07.860 15627.815 - 15728.640: 92.9302% ( 66) 00:09:07.860 15728.640 - 15829.465: 93.4137% ( 69) 00:09:07.860 15829.465 - 15930.289: 93.7500% ( 48) 00:09:07.860 15930.289 - 16031.114: 94.1354% ( 55) 00:09:07.860 16031.114 - 16131.938: 94.4857% ( 50) 00:09:07.860 16131.938 - 16232.763: 94.8080% ( 46) 00:09:07.860 16232.763 - 16333.588: 95.2214% ( 59) 00:09:07.860 16333.588 - 16434.412: 95.4246% ( 29) 00:09:07.860 16434.412 - 16535.237: 95.7329% ( 44) 00:09:07.860 16535.237 - 16636.062: 95.9641% ( 33) 00:09:07.860 16636.062 - 16736.886: 96.2094% ( 35) 00:09:07.860 16736.886 - 16837.711: 96.3215% ( 16) 00:09:07.860 16837.711 - 16938.535: 96.4266% ( 15) 00:09:07.860 16938.535 - 17039.360: 96.4896% ( 9) 00:09:07.860 17039.360 - 17140.185: 96.5597% ( 10) 00:09:07.860 17140.185 - 17241.009: 96.6228% ( 9) 00:09:07.860 17241.009 - 17341.834: 96.6858% ( 9) 00:09:07.860 17341.834 - 17442.658: 96.7209% ( 5) 00:09:07.860 17442.658 - 17543.483: 96.7419% ( 3) 00:09:07.860 17543.483 - 17644.308: 96.7909% ( 7) 00:09:07.860 17644.308 - 17745.132: 96.8119% ( 3) 00:09:07.860 17745.132 - 17845.957: 96.8680% ( 8) 00:09:07.860 17845.957 - 17946.782: 97.0221% ( 22) 00:09:07.860 17946.782 - 18047.606: 97.0992% ( 11) 00:09:07.860 18047.606 - 18148.431: 97.1903% ( 13) 00:09:07.860 18148.431 - 18249.255: 97.3024% ( 16) 00:09:07.860 18249.255 - 18350.080: 97.4566% ( 22) 00:09:07.860 18350.080 - 18450.905: 97.6037% ( 21) 00:09:07.860 18450.905 - 18551.729: 97.6878% ( 12) 00:09:07.860 18551.729 - 18652.554: 97.8209% ( 19) 00:09:07.860 18652.554 - 18753.378: 97.9190% ( 14) 00:09:07.860 18753.378 - 18854.203: 98.0451% ( 18) 00:09:07.860 18854.203 - 18955.028: 98.1853% ( 20) 00:09:07.860 18955.028 - 19055.852: 98.3534% ( 24) 00:09:07.860 19055.852 - 19156.677: 98.5006% ( 21) 00:09:07.860 19156.677 - 19257.502: 98.6407% ( 20) 
00:09:07.860 19257.502 - 19358.326: 98.7738% ( 19) 00:09:07.860 19358.326 - 19459.151: 98.9280% ( 22) 00:09:07.860 19459.151 - 19559.975: 99.0401% ( 16) 00:09:07.860 19559.975 - 19660.800: 99.1101% ( 10) 00:09:07.860 19660.800 - 19761.625: 99.1382% ( 4) 00:09:07.860 19761.625 - 19862.449: 99.1872% ( 7) 00:09:07.860 19862.449 - 19963.274: 99.2152% ( 4) 00:09:07.860 19963.274 - 20064.098: 99.2573% ( 6) 00:09:07.860 20064.098 - 20164.923: 99.2923% ( 5) 00:09:07.860 20164.923 - 20265.748: 99.3414% ( 7) 00:09:07.860 20265.748 - 20366.572: 99.3834% ( 6) 00:09:07.860 20366.572 - 20467.397: 99.4044% ( 3) 00:09:07.860 20467.397 - 20568.222: 99.4114% ( 1) 00:09:07.860 20568.222 - 20669.046: 99.4395% ( 4) 00:09:07.860 20669.046 - 20769.871: 99.4465% ( 1) 00:09:07.860 20769.871 - 20870.695: 99.4535% ( 1) 00:09:07.860 20870.695 - 20971.520: 99.4815% ( 4) 00:09:07.860 20971.520 - 21072.345: 99.4885% ( 1) 00:09:07.860 21072.345 - 21173.169: 99.5095% ( 3) 00:09:07.860 21173.169 - 21273.994: 99.5165% ( 1) 00:09:07.860 21273.994 - 21374.818: 99.5446% ( 4) 00:09:07.860 21475.643 - 21576.468: 99.5516% ( 1) 00:09:07.860 27827.594 - 28029.243: 99.5586% ( 1) 00:09:07.860 28029.243 - 28230.892: 99.6216% ( 9) 00:09:07.860 28230.892 - 28432.542: 99.6987% ( 11) 00:09:07.860 28432.542 - 28634.191: 99.7688% ( 10) 00:09:07.860 28634.191 - 28835.840: 99.8459% ( 11) 00:09:07.860 28835.840 - 29037.489: 99.9159% ( 10) 00:09:07.860 29037.489 - 29239.138: 99.9930% ( 11) 00:09:07.860 29239.138 - 29440.788: 100.0000% ( 1) 00:09:07.860 00:09:07.860 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:07.860 ============================================================================== 00:09:07.860 Range in us Cumulative IO count 00:09:07.860 4360.665 - 4385.871: 0.0140% ( 2) 00:09:07.860 4385.871 - 4411.077: 0.0350% ( 3) 00:09:07.860 4411.077 - 4436.283: 0.0420% ( 1) 00:09:07.860 4436.283 - 4461.489: 0.0631% ( 3) 00:09:07.860 4461.489 - 4486.695: 0.0771% ( 2) 00:09:07.860 4486.695 - 4511.902: 0.0911% ( 2) 00:09:07.860 4511.902 - 4537.108: 0.1051% ( 2) 00:09:07.860 4537.108 - 4562.314: 0.1331% ( 4) 00:09:07.860 4562.314 - 4587.520: 0.1471% ( 2) 00:09:07.861 4587.520 - 4612.726: 0.1612% ( 2) 00:09:07.861 4612.726 - 4637.932: 0.1752% ( 2) 00:09:07.861 4637.932 - 4663.138: 0.1892% ( 2) 00:09:07.861 4663.138 - 4688.345: 0.2032% ( 2) 00:09:07.861 4688.345 - 4713.551: 0.2172% ( 2) 00:09:07.861 4713.551 - 4738.757: 0.2312% ( 2) 00:09:07.861 4738.757 - 4763.963: 0.2452% ( 2) 00:09:07.861 4763.963 - 4789.169: 0.2592% ( 2) 00:09:07.861 4789.169 - 4814.375: 0.2803% ( 3) 00:09:07.861 4814.375 - 4839.582: 0.2943% ( 2) 00:09:07.861 4839.582 - 4864.788: 0.3083% ( 2) 00:09:07.861 4864.788 - 4889.994: 0.3223% ( 2) 00:09:07.861 4889.994 - 4915.200: 0.3363% ( 2) 00:09:07.861 4915.200 - 4940.406: 0.3573% ( 3) 00:09:07.861 4940.406 - 4965.612: 0.3714% ( 2) 00:09:07.861 4965.612 - 4990.818: 0.3854% ( 2) 00:09:07.861 4990.818 - 5016.025: 0.3994% ( 2) 00:09:07.861 5016.025 - 5041.231: 0.4204% ( 3) 00:09:07.861 5041.231 - 5066.437: 0.4344% ( 2) 00:09:07.861 5066.437 - 5091.643: 0.4484% ( 2) 00:09:07.861 5923.446 - 5948.652: 0.5045% ( 8) 00:09:07.861 5948.652 - 5973.858: 0.5395% ( 5) 00:09:07.861 5973.858 - 5999.065: 0.6726% ( 19) 00:09:07.861 5999.065 - 6024.271: 0.9529% ( 40) 00:09:07.861 6024.271 - 6049.477: 1.5415% ( 84) 00:09:07.861 6049.477 - 6074.683: 2.4944% ( 136) 00:09:07.861 6074.683 - 6099.889: 3.9518% ( 208) 00:09:07.861 6099.889 - 6125.095: 5.6825% ( 247) 00:09:07.861 6125.095 - 6150.302: 7.4552% ( 253) 00:09:07.861 6150.302 
- 6175.508: 9.3470% ( 270) 00:09:07.861 6175.508 - 6200.714: 11.1827% ( 262) 00:09:07.861 6200.714 - 6225.920: 12.9765% ( 256) 00:09:07.861 6225.920 - 6251.126: 14.6090% ( 233) 00:09:07.861 6251.126 - 6276.332: 16.2696% ( 237) 00:09:07.861 6276.332 - 6301.538: 17.9022% ( 233) 00:09:07.861 6301.538 - 6326.745: 19.5348% ( 233) 00:09:07.861 6326.745 - 6351.951: 21.2304% ( 242) 00:09:07.861 6351.951 - 6377.157: 23.0031% ( 253) 00:09:07.861 6377.157 - 6402.363: 24.7898% ( 255) 00:09:07.861 6402.363 - 6427.569: 26.7237% ( 276) 00:09:07.861 6427.569 - 6452.775: 28.5594% ( 262) 00:09:07.861 6452.775 - 6503.188: 32.1609% ( 514) 00:09:07.861 6503.188 - 6553.600: 35.6853% ( 503) 00:09:07.861 6553.600 - 6604.012: 39.1606% ( 496) 00:09:07.861 6604.012 - 6654.425: 42.6570% ( 499) 00:09:07.861 6654.425 - 6704.837: 46.2514% ( 513) 00:09:07.861 6704.837 - 6755.249: 49.7758% ( 503) 00:09:07.861 6755.249 - 6805.662: 53.3282% ( 507) 00:09:07.861 6805.662 - 6856.074: 56.7615% ( 490) 00:09:07.861 6856.074 - 6906.486: 60.2088% ( 492) 00:09:07.861 6906.486 - 6956.898: 62.8573% ( 378) 00:09:07.861 6956.898 - 7007.311: 64.4128% ( 222) 00:09:07.861 7007.311 - 7057.723: 65.0014% ( 84) 00:09:07.861 7057.723 - 7108.135: 65.3237% ( 46) 00:09:07.861 7108.135 - 7158.548: 65.6040% ( 40) 00:09:07.861 7158.548 - 7208.960: 65.7932% ( 27) 00:09:07.861 7208.960 - 7259.372: 65.8913% ( 14) 00:09:07.861 7259.372 - 7309.785: 65.9403% ( 7) 00:09:07.861 7309.785 - 7360.197: 65.9893% ( 7) 00:09:07.861 7360.197 - 7410.609: 66.0314% ( 6) 00:09:07.861 7410.609 - 7461.022: 66.1085% ( 11) 00:09:07.861 7461.022 - 7511.434: 66.2066% ( 14) 00:09:07.861 7511.434 - 7561.846: 66.3187% ( 16) 00:09:07.861 7561.846 - 7612.258: 66.4378% ( 17) 00:09:07.861 7612.258 - 7662.671: 66.5709% ( 19) 00:09:07.861 7662.671 - 7713.083: 66.7040% ( 19) 00:09:07.861 7713.083 - 7763.495: 66.8161% ( 16) 00:09:07.861 7763.495 - 7813.908: 66.9283% ( 16) 00:09:07.861 7813.908 - 7864.320: 67.0754% ( 21) 00:09:07.861 7864.320 - 7914.732: 67.1945% ( 17) 00:09:07.861 7914.732 - 7965.145: 67.3136% ( 17) 00:09:07.861 7965.145 - 8015.557: 67.4257% ( 16) 00:09:07.861 8015.557 - 8065.969: 67.5098% ( 12) 00:09:07.861 8065.969 - 8116.382: 67.6570% ( 21) 00:09:07.861 8116.382 - 8166.794: 67.7691% ( 16) 00:09:07.861 8166.794 - 8217.206: 67.8882% ( 17) 00:09:07.861 8217.206 - 8267.618: 68.0073% ( 17) 00:09:07.861 8267.618 - 8318.031: 68.1334% ( 18) 00:09:07.861 8318.031 - 8368.443: 68.2455% ( 16) 00:09:07.861 8368.443 - 8418.855: 68.3156% ( 10) 00:09:07.861 8418.855 - 8469.268: 68.3716% ( 8) 00:09:07.861 8469.268 - 8519.680: 68.3997% ( 4) 00:09:07.861 8519.680 - 8570.092: 68.4207% ( 3) 00:09:07.861 8570.092 - 8620.505: 68.4487% ( 4) 00:09:07.861 8620.505 - 8670.917: 68.5328% ( 12) 00:09:07.861 8670.917 - 8721.329: 68.5678% ( 5) 00:09:07.861 8721.329 - 8771.742: 68.6099% ( 6) 00:09:07.861 8771.742 - 8822.154: 68.6659% ( 8) 00:09:07.861 8822.154 - 8872.566: 68.7080% ( 6) 00:09:07.861 8872.566 - 8922.978: 68.7850% ( 11) 00:09:07.861 8922.978 - 8973.391: 68.8621% ( 11) 00:09:07.861 8973.391 - 9023.803: 68.9322% ( 10) 00:09:07.861 9023.803 - 9074.215: 69.0022% ( 10) 00:09:07.861 9074.215 - 9124.628: 69.0723% ( 10) 00:09:07.861 9124.628 - 9175.040: 69.1634% ( 13) 00:09:07.861 9175.040 - 9225.452: 69.2265% ( 9) 00:09:07.861 9225.452 - 9275.865: 69.2965% ( 10) 00:09:07.861 9275.865 - 9326.277: 69.3666% ( 10) 00:09:07.861 9326.277 - 9376.689: 69.4437% ( 11) 00:09:07.861 9376.689 - 9427.102: 69.5277% ( 12) 00:09:07.861 9427.102 - 9477.514: 69.6258% ( 14) 00:09:07.861 9477.514 - 9527.926: 
69.7169% ( 13) 00:09:07.861 9527.926 - 9578.338: 69.7940% ( 11) 00:09:07.861 9578.338 - 9628.751: 69.8641% ( 10) 00:09:07.861 9628.751 - 9679.163: 69.9271% ( 9) 00:09:07.861 9679.163 - 9729.575: 70.0042% ( 11) 00:09:07.861 9729.575 - 9779.988: 70.0743% ( 10) 00:09:07.861 9779.988 - 9830.400: 70.1794% ( 15) 00:09:07.861 9830.400 - 9880.812: 70.2705% ( 13) 00:09:07.861 9880.812 - 9931.225: 70.3335% ( 9) 00:09:07.861 9931.225 - 9981.637: 70.3896% ( 8) 00:09:07.861 9981.637 - 10032.049: 70.4596% ( 10) 00:09:07.861 10032.049 - 10082.462: 70.5157% ( 8) 00:09:07.861 10082.462 - 10132.874: 70.5788% ( 9) 00:09:07.861 10132.874 - 10183.286: 70.6418% ( 9) 00:09:07.861 10183.286 - 10233.698: 70.6979% ( 8) 00:09:07.861 10233.698 - 10284.111: 70.7749% ( 11) 00:09:07.861 10284.111 - 10334.523: 70.8240% ( 7) 00:09:07.861 10334.523 - 10384.935: 70.8660% ( 6) 00:09:07.861 10384.935 - 10435.348: 70.9151% ( 7) 00:09:07.861 10435.348 - 10485.760: 70.9992% ( 12) 00:09:07.861 10485.760 - 10536.172: 71.0552% ( 8) 00:09:07.861 10536.172 - 10586.585: 71.1183% ( 9) 00:09:07.861 10586.585 - 10636.997: 71.1813% ( 9) 00:09:07.861 10636.997 - 10687.409: 71.2444% ( 9) 00:09:07.861 10687.409 - 10737.822: 71.2934% ( 7) 00:09:07.861 10737.822 - 10788.234: 71.3355% ( 6) 00:09:07.861 10788.234 - 10838.646: 71.3705% ( 5) 00:09:07.861 10838.646 - 10889.058: 71.4406% ( 10) 00:09:07.861 10889.058 - 10939.471: 71.5107% ( 10) 00:09:07.861 10939.471 - 10989.883: 71.5597% ( 7) 00:09:07.861 10989.883 - 11040.295: 71.6158% ( 8) 00:09:07.861 11040.295 - 11090.708: 71.6858% ( 10) 00:09:07.861 11090.708 - 11141.120: 71.7699% ( 12) 00:09:07.861 11141.120 - 11191.532: 71.8400% ( 10) 00:09:07.861 11191.532 - 11241.945: 71.9381% ( 14) 00:09:07.861 11241.945 - 11292.357: 72.0221% ( 12) 00:09:07.861 11292.357 - 11342.769: 72.0922% ( 10) 00:09:07.861 11342.769 - 11393.182: 72.1623% ( 10) 00:09:07.861 11393.182 - 11443.594: 72.2113% ( 7) 00:09:07.861 11443.594 - 11494.006: 72.2884% ( 11) 00:09:07.861 11494.006 - 11544.418: 72.3585% ( 10) 00:09:07.861 11544.418 - 11594.831: 72.4355% ( 11) 00:09:07.861 11594.831 - 11645.243: 72.5266% ( 13) 00:09:07.861 11645.243 - 11695.655: 72.6247% ( 14) 00:09:07.861 11695.655 - 11746.068: 72.7578% ( 19) 00:09:07.861 11746.068 - 11796.480: 72.8840% ( 18) 00:09:07.861 11796.480 - 11846.892: 73.0171% ( 19) 00:09:07.861 11846.892 - 11897.305: 73.1642% ( 21) 00:09:07.861 11897.305 - 11947.717: 73.3114% ( 21) 00:09:07.861 11947.717 - 11998.129: 73.4095% ( 14) 00:09:07.861 11998.129 - 12048.542: 73.5076% ( 14) 00:09:07.861 12048.542 - 12098.954: 73.6407% ( 19) 00:09:07.861 12098.954 - 12149.366: 73.8089% ( 24) 00:09:07.861 12149.366 - 12199.778: 73.9210% ( 16) 00:09:07.861 12199.778 - 12250.191: 74.0681% ( 21) 00:09:07.861 12250.191 - 12300.603: 74.2223% ( 22) 00:09:07.861 12300.603 - 12351.015: 74.3484% ( 18) 00:09:07.861 12351.015 - 12401.428: 74.4535% ( 15) 00:09:07.861 12401.428 - 12451.840: 74.6427% ( 27) 00:09:07.861 12451.840 - 12502.252: 74.8388% ( 28) 00:09:07.861 12502.252 - 12552.665: 75.0000% ( 23) 00:09:07.861 12552.665 - 12603.077: 75.2522% ( 36) 00:09:07.861 12603.077 - 12653.489: 75.4975% ( 35) 00:09:07.861 12653.489 - 12703.902: 75.7707% ( 39) 00:09:07.861 12703.902 - 12754.314: 76.1421% ( 53) 00:09:07.861 12754.314 - 12804.726: 76.4924% ( 50) 00:09:07.861 12804.726 - 12855.138: 76.8077% ( 45) 00:09:07.861 12855.138 - 12905.551: 77.1371% ( 47) 00:09:07.861 12905.551 - 13006.375: 77.7256% ( 84) 00:09:07.861 13006.375 - 13107.200: 78.5104% ( 112) 00:09:07.861 13107.200 - 13208.025: 79.2671% ( 108) 
00:09:07.861 13208.025 - 13308.849: 80.1149% ( 121) 00:09:07.861 13308.849 - 13409.674: 81.0118% ( 128) 00:09:07.861 13409.674 - 13510.498: 81.9577% ( 135) 00:09:07.862 13510.498 - 13611.323: 82.8896% ( 133) 00:09:07.862 13611.323 - 13712.148: 83.6673% ( 111) 00:09:07.862 13712.148 - 13812.972: 84.3680% ( 100) 00:09:07.862 13812.972 - 13913.797: 85.0476% ( 97) 00:09:07.862 13913.797 - 14014.622: 85.6362% ( 84) 00:09:07.862 14014.622 - 14115.446: 86.1827% ( 78) 00:09:07.862 14115.446 - 14216.271: 86.7293% ( 78) 00:09:07.862 14216.271 - 14317.095: 87.1917% ( 66) 00:09:07.862 14317.095 - 14417.920: 87.6121% ( 60) 00:09:07.862 14417.920 - 14518.745: 88.0465% ( 62) 00:09:07.862 14518.745 - 14619.569: 88.4389% ( 56) 00:09:07.862 14619.569 - 14720.394: 88.9084% ( 67) 00:09:07.862 14720.394 - 14821.218: 89.3288% ( 60) 00:09:07.862 14821.218 - 14922.043: 89.6861% ( 51) 00:09:07.862 14922.043 - 15022.868: 90.0645% ( 54) 00:09:07.862 15022.868 - 15123.692: 90.4849% ( 60) 00:09:07.862 15123.692 - 15224.517: 90.9263% ( 63) 00:09:07.862 15224.517 - 15325.342: 91.4658% ( 77) 00:09:07.862 15325.342 - 15426.166: 91.9773% ( 73) 00:09:07.862 15426.166 - 15526.991: 92.4888% ( 73) 00:09:07.862 15526.991 - 15627.815: 92.9372% ( 64) 00:09:07.862 15627.815 - 15728.640: 93.3576% ( 60) 00:09:07.862 15728.640 - 15829.465: 93.7710% ( 59) 00:09:07.862 15829.465 - 15930.289: 94.1424% ( 53) 00:09:07.862 15930.289 - 16031.114: 94.5768% ( 62) 00:09:07.862 16031.114 - 16131.938: 95.0042% ( 61) 00:09:07.862 16131.938 - 16232.763: 95.3265% ( 46) 00:09:07.862 16232.763 - 16333.588: 95.6138% ( 41) 00:09:07.862 16333.588 - 16434.412: 95.8240% ( 30) 00:09:07.862 16434.412 - 16535.237: 96.0132% ( 27) 00:09:07.862 16535.237 - 16636.062: 96.1813% ( 24) 00:09:07.862 16636.062 - 16736.886: 96.3425% ( 23) 00:09:07.862 16736.886 - 16837.711: 96.5107% ( 24) 00:09:07.862 16837.711 - 16938.535: 96.6508% ( 20) 00:09:07.862 16938.535 - 17039.360: 96.7419% ( 13) 00:09:07.862 17039.360 - 17140.185: 96.7839% ( 6) 00:09:07.862 17140.185 - 17241.009: 96.8119% ( 4) 00:09:07.862 17241.009 - 17341.834: 96.8330% ( 3) 00:09:07.862 17341.834 - 17442.658: 96.8470% ( 2) 00:09:07.862 17442.658 - 17543.483: 96.8610% ( 2) 00:09:07.862 17543.483 - 17644.308: 96.9030% ( 6) 00:09:07.862 17644.308 - 17745.132: 96.9451% ( 6) 00:09:07.862 17745.132 - 17845.957: 96.9871% ( 6) 00:09:07.862 17845.957 - 17946.782: 97.0291% ( 6) 00:09:07.862 17946.782 - 18047.606: 97.0712% ( 6) 00:09:07.862 18047.606 - 18148.431: 97.1202% ( 7) 00:09:07.862 18148.431 - 18249.255: 97.1623% ( 6) 00:09:07.862 18249.255 - 18350.080: 97.2744% ( 16) 00:09:07.862 18350.080 - 18450.905: 97.4075% ( 19) 00:09:07.862 18450.905 - 18551.729: 97.5757% ( 24) 00:09:07.862 18551.729 - 18652.554: 97.7578% ( 26) 00:09:07.862 18652.554 - 18753.378: 97.8910% ( 19) 00:09:07.862 18753.378 - 18854.203: 98.0311% ( 20) 00:09:07.862 18854.203 - 18955.028: 98.1783% ( 21) 00:09:07.862 18955.028 - 19055.852: 98.3114% ( 19) 00:09:07.862 19055.852 - 19156.677: 98.4655% ( 22) 00:09:07.862 19156.677 - 19257.502: 98.5916% ( 18) 00:09:07.862 19257.502 - 19358.326: 98.7248% ( 19) 00:09:07.862 19358.326 - 19459.151: 98.8509% ( 18) 00:09:07.862 19459.151 - 19559.975: 98.9350% ( 12) 00:09:07.862 19559.975 - 19660.800: 98.9840% ( 7) 00:09:07.862 19660.800 - 19761.625: 99.0331% ( 7) 00:09:07.862 19761.625 - 19862.449: 99.0751% ( 6) 00:09:07.862 19862.449 - 19963.274: 99.1101% ( 5) 00:09:07.862 19963.274 - 20064.098: 99.1662% ( 8) 00:09:07.862 20064.098 - 20164.923: 99.2082% ( 6) 00:09:07.862 20164.923 - 20265.748: 99.2503% ( 
6) 00:09:07.862 20265.748 - 20366.572: 99.3063% ( 8) 00:09:07.862 20366.572 - 20467.397: 99.3484% ( 6) 00:09:07.862 20467.397 - 20568.222: 99.3904% ( 6) 00:09:07.862 20568.222 - 20669.046: 99.4184% ( 4) 00:09:07.862 20669.046 - 20769.871: 99.4395% ( 3) 00:09:07.862 20769.871 - 20870.695: 99.4605% ( 3) 00:09:07.862 20870.695 - 20971.520: 99.4885% ( 4) 00:09:07.862 20971.520 - 21072.345: 99.5095% ( 3) 00:09:07.862 21072.345 - 21173.169: 99.5376% ( 4) 00:09:07.862 21173.169 - 21273.994: 99.5516% ( 2) 00:09:07.862 27424.295 - 27625.945: 99.5586% ( 1) 00:09:07.862 27625.945 - 27827.594: 99.6006% ( 6) 00:09:07.862 27827.594 - 28029.243: 99.6427% ( 6) 00:09:07.862 28029.243 - 28230.892: 99.6987% ( 8) 00:09:07.862 28230.892 - 28432.542: 99.7548% ( 8) 00:09:07.862 28432.542 - 28634.191: 99.8388% ( 12) 00:09:07.862 28634.191 - 28835.840: 99.9229% ( 12) 00:09:07.862 28835.840 - 29037.489: 100.0000% ( 11) 00:09:07.862 00:09:07.862 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:07.862 ============================================================================== 00:09:07.862 Range in us Cumulative IO count 00:09:07.862 3881.748 - 3906.954: 0.0140% ( 2) 00:09:07.862 3906.954 - 3932.160: 0.0350% ( 3) 00:09:07.862 3932.160 - 3957.366: 0.0561% ( 3) 00:09:07.862 3957.366 - 3982.572: 0.0701% ( 2) 00:09:07.862 3982.572 - 4007.778: 0.0771% ( 1) 00:09:07.862 4007.778 - 4032.985: 0.0981% ( 3) 00:09:07.862 4032.985 - 4058.191: 0.1121% ( 2) 00:09:07.862 4058.191 - 4083.397: 0.1261% ( 2) 00:09:07.862 4083.397 - 4108.603: 0.1541% ( 4) 00:09:07.862 4108.603 - 4133.809: 0.1682% ( 2) 00:09:07.862 4133.809 - 4159.015: 0.1892% ( 3) 00:09:07.862 4159.015 - 4184.222: 0.2032% ( 2) 00:09:07.862 4184.222 - 4209.428: 0.2242% ( 3) 00:09:07.862 4209.428 - 4234.634: 0.2382% ( 2) 00:09:07.862 4234.634 - 4259.840: 0.2522% ( 2) 00:09:07.862 4259.840 - 4285.046: 0.2663% ( 2) 00:09:07.862 4285.046 - 4310.252: 0.2803% ( 2) 00:09:07.862 4310.252 - 4335.458: 0.2943% ( 2) 00:09:07.862 4335.458 - 4360.665: 0.3083% ( 2) 00:09:07.862 4360.665 - 4385.871: 0.3223% ( 2) 00:09:07.862 4385.871 - 4411.077: 0.3363% ( 2) 00:09:07.862 4411.077 - 4436.283: 0.3573% ( 3) 00:09:07.862 4436.283 - 4461.489: 0.3714% ( 2) 00:09:07.862 4461.489 - 4486.695: 0.3854% ( 2) 00:09:07.862 4486.695 - 4511.902: 0.4064% ( 3) 00:09:07.862 4511.902 - 4537.108: 0.4204% ( 2) 00:09:07.862 4537.108 - 4562.314: 0.4344% ( 2) 00:09:07.862 4562.314 - 4587.520: 0.4484% ( 2) 00:09:07.862 5444.529 - 5469.735: 0.5045% ( 8) 00:09:07.862 5469.735 - 5494.942: 0.5115% ( 1) 00:09:07.862 5494.942 - 5520.148: 0.5325% ( 3) 00:09:07.862 5545.354 - 5570.560: 0.5465% ( 2) 00:09:07.862 5570.560 - 5595.766: 0.5605% ( 2) 00:09:07.862 5595.766 - 5620.972: 0.5746% ( 2) 00:09:07.862 5620.972 - 5646.178: 0.5956% ( 3) 00:09:07.862 5646.178 - 5671.385: 0.6096% ( 2) 00:09:07.862 5671.385 - 5696.591: 0.6236% ( 2) 00:09:07.862 5696.591 - 5721.797: 0.6376% ( 2) 00:09:07.862 5721.797 - 5747.003: 0.6586% ( 3) 00:09:07.862 5747.003 - 5772.209: 0.6726% ( 2) 00:09:07.862 5772.209 - 5797.415: 0.6867% ( 2) 00:09:07.862 5797.415 - 5822.622: 0.7077% ( 3) 00:09:07.862 5822.622 - 5847.828: 0.7217% ( 2) 00:09:07.862 5847.828 - 5873.034: 0.7357% ( 2) 00:09:07.862 5873.034 - 5898.240: 0.7567% ( 3) 00:09:07.862 5898.240 - 5923.446: 0.7707% ( 2) 00:09:07.862 5923.446 - 5948.652: 0.7918% ( 3) 00:09:07.862 5948.652 - 5973.858: 0.8548% ( 9) 00:09:07.862 5973.858 - 5999.065: 0.9459% ( 13) 00:09:07.862 5999.065 - 6024.271: 1.2332% ( 41) 00:09:07.862 6024.271 - 6049.477: 1.9268% ( 99) 00:09:07.862 6049.477 
- 6074.683: 3.0830% ( 165) 00:09:07.862 6074.683 - 6099.889: 4.4913% ( 201) 00:09:07.862 6099.889 - 6125.095: 6.2640% ( 253) 00:09:07.862 6125.095 - 6150.302: 8.0998% ( 262) 00:09:07.862 6150.302 - 6175.508: 9.7744% ( 239) 00:09:07.862 6175.508 - 6200.714: 11.6242% ( 264) 00:09:07.862 6200.714 - 6225.920: 13.3478% ( 246) 00:09:07.862 6225.920 - 6251.126: 15.0575% ( 244) 00:09:07.862 6251.126 - 6276.332: 16.6129% ( 222) 00:09:07.862 6276.332 - 6301.538: 18.2525% ( 234) 00:09:07.862 6301.538 - 6326.745: 19.9832% ( 247) 00:09:07.862 6326.745 - 6351.951: 21.6228% ( 234) 00:09:07.862 6351.951 - 6377.157: 23.3744% ( 250) 00:09:07.862 6377.157 - 6402.363: 25.0701% ( 242) 00:09:07.862 6402.363 - 6427.569: 26.8988% ( 261) 00:09:07.862 6427.569 - 6452.775: 28.6785% ( 254) 00:09:07.862 6452.775 - 6503.188: 32.2169% ( 505) 00:09:07.862 6503.188 - 6553.600: 35.6853% ( 495) 00:09:07.862 6553.600 - 6604.012: 39.1816% ( 499) 00:09:07.862 6604.012 - 6654.425: 42.6499% ( 495) 00:09:07.862 6654.425 - 6704.837: 46.1253% ( 496) 00:09:07.862 6704.837 - 6755.249: 49.5866% ( 494) 00:09:07.862 6755.249 - 6805.662: 53.1881% ( 514) 00:09:07.862 6805.662 - 6856.074: 56.6284% ( 491) 00:09:07.863 6856.074 - 6906.486: 60.0547% ( 489) 00:09:07.863 6906.486 - 6956.898: 62.7032% ( 378) 00:09:07.863 6956.898 - 7007.311: 64.2937% ( 227) 00:09:07.863 7007.311 - 7057.723: 65.0294% ( 105) 00:09:07.863 7057.723 - 7108.135: 65.5199% ( 70) 00:09:07.863 7108.135 - 7158.548: 65.8492% ( 47) 00:09:07.863 7158.548 - 7208.960: 66.1855% ( 48) 00:09:07.863 7208.960 - 7259.372: 66.3817% ( 28) 00:09:07.863 7259.372 - 7309.785: 66.5219% ( 20) 00:09:07.863 7309.785 - 7360.197: 66.6550% ( 19) 00:09:07.863 7360.197 - 7410.609: 66.7531% ( 14) 00:09:07.863 7410.609 - 7461.022: 66.8512% ( 14) 00:09:07.863 7461.022 - 7511.434: 66.9423% ( 13) 00:09:07.863 7511.434 - 7561.846: 67.0263% ( 12) 00:09:07.863 7561.846 - 7612.258: 67.0824% ( 8) 00:09:07.863 7612.258 - 7662.671: 67.1385% ( 8) 00:09:07.863 7662.671 - 7713.083: 67.1805% ( 6) 00:09:07.863 7713.083 - 7763.495: 67.2295% ( 7) 00:09:07.863 7763.495 - 7813.908: 67.2786% ( 7) 00:09:07.863 7813.908 - 7864.320: 67.3136% ( 5) 00:09:07.863 7864.320 - 7914.732: 67.3346% ( 3) 00:09:07.863 7914.732 - 7965.145: 67.3627% ( 4) 00:09:07.863 7965.145 - 8015.557: 67.3837% ( 3) 00:09:07.863 8015.557 - 8065.969: 67.4117% ( 4) 00:09:07.863 8065.969 - 8116.382: 67.4327% ( 3) 00:09:07.863 8116.382 - 8166.794: 67.4538% ( 3) 00:09:07.863 8166.794 - 8217.206: 67.4958% ( 6) 00:09:07.863 8217.206 - 8267.618: 67.5518% ( 8) 00:09:07.863 8267.618 - 8318.031: 67.5939% ( 6) 00:09:07.863 8318.031 - 8368.443: 67.6359% ( 6) 00:09:07.863 8368.443 - 8418.855: 67.6920% ( 8) 00:09:07.863 8418.855 - 8469.268: 67.7480% ( 8) 00:09:07.863 8469.268 - 8519.680: 67.7971% ( 7) 00:09:07.863 8519.680 - 8570.092: 67.8391% ( 6) 00:09:07.863 8570.092 - 8620.505: 67.8882% ( 7) 00:09:07.863 8620.505 - 8670.917: 67.9652% ( 11) 00:09:07.863 8670.917 - 8721.329: 68.0283% ( 9) 00:09:07.863 8721.329 - 8771.742: 68.0703% ( 6) 00:09:07.863 8771.742 - 8822.154: 68.1404% ( 10) 00:09:07.863 8822.154 - 8872.566: 68.2315% ( 13) 00:09:07.863 8872.566 - 8922.978: 68.3226% ( 13) 00:09:07.863 8922.978 - 8973.391: 68.4207% ( 14) 00:09:07.863 8973.391 - 9023.803: 68.5748% ( 22) 00:09:07.863 9023.803 - 9074.215: 68.7220% ( 21) 00:09:07.863 9074.215 - 9124.628: 68.8691% ( 21) 00:09:07.863 9124.628 - 9175.040: 69.0092% ( 20) 00:09:07.863 9175.040 - 9225.452: 69.1634% ( 22) 00:09:07.863 9225.452 - 9275.865: 69.2965% ( 19) 00:09:07.863 9275.865 - 9326.277: 69.4647% ( 24) 
00:09:07.863 9326.277 - 9376.689: 69.6258% ( 23) 00:09:07.863 9376.689 - 9427.102: 69.8010% ( 25) 00:09:07.863 9427.102 - 9477.514: 69.9622% ( 23) 00:09:07.863 9477.514 - 9527.926: 70.1163% ( 22) 00:09:07.863 9527.926 - 9578.338: 70.2564% ( 20) 00:09:07.863 9578.338 - 9628.751: 70.4106% ( 22) 00:09:07.863 9628.751 - 9679.163: 70.5437% ( 19) 00:09:07.863 9679.163 - 9729.575: 70.6909% ( 21) 00:09:07.863 9729.575 - 9779.988: 70.8030% ( 16) 00:09:07.863 9779.988 - 9830.400: 70.8941% ( 13) 00:09:07.863 9830.400 - 9880.812: 70.9922% ( 14) 00:09:07.863 9880.812 - 9931.225: 71.0762% ( 12) 00:09:07.863 9931.225 - 9981.637: 71.1253% ( 7) 00:09:07.863 9981.637 - 10032.049: 71.1743% ( 7) 00:09:07.863 10032.049 - 10082.462: 71.2164% ( 6) 00:09:07.863 10082.462 - 10132.874: 71.2514% ( 5) 00:09:07.863 10132.874 - 10183.286: 71.2724% ( 3) 00:09:07.863 10183.286 - 10233.698: 71.3004% ( 4) 00:09:07.863 10636.997 - 10687.409: 71.3075% ( 1) 00:09:07.863 10687.409 - 10737.822: 71.3355% ( 4) 00:09:07.863 10737.822 - 10788.234: 71.3565% ( 3) 00:09:07.863 10788.234 - 10838.646: 71.3845% ( 4) 00:09:07.863 10838.646 - 10889.058: 71.4055% ( 3) 00:09:07.863 10889.058 - 10939.471: 71.4336% ( 4) 00:09:07.863 10939.471 - 10989.883: 71.4826% ( 7) 00:09:07.863 10989.883 - 11040.295: 71.5177% ( 5) 00:09:07.863 11040.295 - 11090.708: 71.5527% ( 5) 00:09:07.863 11090.708 - 11141.120: 71.5947% ( 6) 00:09:07.863 11141.120 - 11191.532: 71.6438% ( 7) 00:09:07.863 11191.532 - 11241.945: 71.7068% ( 9) 00:09:07.863 11241.945 - 11292.357: 71.7769% ( 10) 00:09:07.863 11292.357 - 11342.769: 71.8400% ( 9) 00:09:07.863 11342.769 - 11393.182: 71.9170% ( 11) 00:09:07.863 11393.182 - 11443.594: 71.9941% ( 11) 00:09:07.863 11443.594 - 11494.006: 72.0992% ( 15) 00:09:07.863 11494.006 - 11544.418: 72.1973% ( 14) 00:09:07.863 11544.418 - 11594.831: 72.3094% ( 16) 00:09:07.863 11594.831 - 11645.243: 72.4075% ( 14) 00:09:07.863 11645.243 - 11695.655: 72.4986% ( 13) 00:09:07.863 11695.655 - 11746.068: 72.6177% ( 17) 00:09:07.863 11746.068 - 11796.480: 72.7508% ( 19) 00:09:07.863 11796.480 - 11846.892: 72.8700% ( 17) 00:09:07.863 11846.892 - 11897.305: 73.0451% ( 25) 00:09:07.863 11897.305 - 11947.717: 73.2063% ( 23) 00:09:07.863 11947.717 - 11998.129: 73.3464% ( 20) 00:09:07.863 11998.129 - 12048.542: 73.4936% ( 21) 00:09:07.863 12048.542 - 12098.954: 73.6477% ( 22) 00:09:07.863 12098.954 - 12149.366: 73.8089% ( 23) 00:09:07.863 12149.366 - 12199.778: 74.0261% ( 31) 00:09:07.863 12199.778 - 12250.191: 74.2012% ( 25) 00:09:07.863 12250.191 - 12300.603: 74.3694% ( 24) 00:09:07.863 12300.603 - 12351.015: 74.5446% ( 25) 00:09:07.863 12351.015 - 12401.428: 74.7478% ( 29) 00:09:07.863 12401.428 - 12451.840: 74.9860% ( 34) 00:09:07.863 12451.840 - 12502.252: 75.1892% ( 29) 00:09:07.863 12502.252 - 12552.665: 75.4064% ( 31) 00:09:07.863 12552.665 - 12603.077: 75.6236% ( 31) 00:09:07.863 12603.077 - 12653.489: 75.8618% ( 34) 00:09:07.863 12653.489 - 12703.902: 76.1771% ( 45) 00:09:07.863 12703.902 - 12754.314: 76.4714% ( 42) 00:09:07.863 12754.314 - 12804.726: 76.7937% ( 46) 00:09:07.863 12804.726 - 12855.138: 77.1300% ( 48) 00:09:07.863 12855.138 - 12905.551: 77.4103% ( 40) 00:09:07.863 12905.551 - 13006.375: 78.0269% ( 88) 00:09:07.863 13006.375 - 13107.200: 78.8327% ( 115) 00:09:07.863 13107.200 - 13208.025: 79.7015% ( 124) 00:09:07.863 13208.025 - 13308.849: 80.6264% ( 132) 00:09:07.863 13308.849 - 13409.674: 81.4672% ( 120) 00:09:07.863 13409.674 - 13510.498: 82.2870% ( 117) 00:09:07.863 13510.498 - 13611.323: 83.1068% ( 117) 00:09:07.863 13611.323 - 
13712.148: 83.8285% ( 103) 00:09:07.863 13712.148 - 13812.972: 84.5081% ( 97) 00:09:07.863 13812.972 - 13913.797: 85.1668% ( 94) 00:09:07.863 13913.797 - 14014.622: 85.8744% ( 101) 00:09:07.863 14014.622 - 14115.446: 86.6031% ( 104) 00:09:07.863 14115.446 - 14216.271: 87.1637% ( 80) 00:09:07.863 14216.271 - 14317.095: 87.5841% ( 60) 00:09:07.863 14317.095 - 14417.920: 87.9975% ( 59) 00:09:07.863 14417.920 - 14518.745: 88.4389% ( 63) 00:09:07.863 14518.745 - 14619.569: 88.9224% ( 69) 00:09:07.863 14619.569 - 14720.394: 89.3428% ( 60) 00:09:07.863 14720.394 - 14821.218: 89.7702% ( 61) 00:09:07.863 14821.218 - 14922.043: 90.1906% ( 60) 00:09:07.863 14922.043 - 15022.868: 90.6110% ( 60) 00:09:07.863 15022.868 - 15123.692: 91.0174% ( 58) 00:09:07.863 15123.692 - 15224.517: 91.3887% ( 53) 00:09:07.863 15224.517 - 15325.342: 91.7391% ( 50) 00:09:07.863 15325.342 - 15426.166: 92.0754% ( 48) 00:09:07.863 15426.166 - 15526.991: 92.3907% ( 45) 00:09:07.863 15526.991 - 15627.815: 92.6359% ( 35) 00:09:07.863 15627.815 - 15728.640: 92.9022% ( 38) 00:09:07.864 15728.640 - 15829.465: 93.2105% ( 44) 00:09:07.864 15829.465 - 15930.289: 93.4908% ( 40) 00:09:07.864 15930.289 - 16031.114: 93.8271% ( 48) 00:09:07.864 16031.114 - 16131.938: 94.0793% ( 36) 00:09:07.864 16131.938 - 16232.763: 94.3806% ( 43) 00:09:07.864 16232.763 - 16333.588: 94.6539% ( 39) 00:09:07.864 16333.588 - 16434.412: 94.9832% ( 47) 00:09:07.864 16434.412 - 16535.237: 95.2564% ( 39) 00:09:07.864 16535.237 - 16636.062: 95.4947% ( 34) 00:09:07.864 16636.062 - 16736.886: 95.7539% ( 37) 00:09:07.864 16736.886 - 16837.711: 96.0272% ( 39) 00:09:07.864 16837.711 - 16938.535: 96.2864% ( 37) 00:09:07.864 16938.535 - 17039.360: 96.4546% ( 24) 00:09:07.864 17039.360 - 17140.185: 96.6438% ( 27) 00:09:07.864 17140.185 - 17241.009: 96.8119% ( 24) 00:09:07.864 17241.009 - 17341.834: 96.9801% ( 24) 00:09:07.864 17341.834 - 17442.658: 97.1413% ( 23) 00:09:07.864 17442.658 - 17543.483: 97.2674% ( 18) 00:09:07.864 17543.483 - 17644.308: 97.3725% ( 15) 00:09:07.864 17644.308 - 17745.132: 97.4566% ( 12) 00:09:07.864 17745.132 - 17845.957: 97.5126% ( 8) 00:09:07.864 17845.957 - 17946.782: 97.5687% ( 8) 00:09:07.864 17946.782 - 18047.606: 97.6317% ( 9) 00:09:07.864 18047.606 - 18148.431: 97.6668% ( 5) 00:09:07.864 18148.431 - 18249.255: 97.6948% ( 4) 00:09:07.864 18249.255 - 18350.080: 97.7158% ( 3) 00:09:07.864 18350.080 - 18450.905: 97.7789% ( 9) 00:09:07.864 18450.905 - 18551.729: 97.8279% ( 7) 00:09:07.864 18551.729 - 18652.554: 97.8700% ( 6) 00:09:07.864 18652.554 - 18753.378: 97.9120% ( 6) 00:09:07.864 18753.378 - 18854.203: 97.9540% ( 6) 00:09:07.864 18854.203 - 18955.028: 98.0031% ( 7) 00:09:07.864 18955.028 - 19055.852: 98.0451% ( 6) 00:09:07.864 19055.852 - 19156.677: 98.1082% ( 9) 00:09:07.864 19156.677 - 19257.502: 98.1923% ( 12) 00:09:07.864 19257.502 - 19358.326: 98.2763% ( 12) 00:09:07.864 19358.326 - 19459.151: 98.3604% ( 12) 00:09:07.864 19459.151 - 19559.975: 98.4515% ( 13) 00:09:07.864 19559.975 - 19660.800: 98.5216% ( 10) 00:09:07.864 19660.800 - 19761.625: 98.5987% ( 11) 00:09:07.864 19761.625 - 19862.449: 98.6757% ( 11) 00:09:07.864 19862.449 - 19963.274: 98.7388% ( 9) 00:09:07.864 19963.274 - 20064.098: 98.7948% ( 8) 00:09:07.864 20064.098 - 20164.923: 98.8579% ( 9) 00:09:07.864 20164.923 - 20265.748: 98.9280% ( 10) 00:09:07.864 20265.748 - 20366.572: 99.0050% ( 11) 00:09:07.864 20366.572 - 20467.397: 99.0681% ( 9) 00:09:07.864 20467.397 - 20568.222: 99.1452% ( 11) 00:09:07.864 20568.222 - 20669.046: 99.2223% ( 11) 00:09:07.864 20669.046 - 
20769.871: 99.2993% ( 11) 00:09:07.864 20769.871 - 20870.695: 99.3694% ( 10) 00:09:07.864 20870.695 - 20971.520: 99.4465% ( 11) 00:09:07.864 20971.520 - 21072.345: 99.4955% ( 7) 00:09:07.864 21072.345 - 21173.169: 99.5376% ( 6) 00:09:07.864 21173.169 - 21273.994: 99.5516% ( 2) 00:09:07.864 27827.594 - 28029.243: 99.5936% ( 6) 00:09:07.864 28029.243 - 28230.892: 99.6637% ( 10) 00:09:07.864 28230.892 - 28432.542: 99.7478% ( 12) 00:09:07.864 28432.542 - 28634.191: 99.8178% ( 10) 00:09:07.864 28634.191 - 28835.840: 99.9089% ( 13) 00:09:07.864 28835.840 - 29037.489: 99.9860% ( 11) 00:09:07.864 29037.489 - 29239.138: 100.0000% ( 2) 00:09:07.864 00:09:07.864 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:07.864 ============================================================================== 00:09:07.864 Range in us Cumulative IO count 00:09:07.864 3680.098 - 3705.305: 0.0070% ( 1) 00:09:07.864 3705.305 - 3730.511: 0.0490% ( 6) 00:09:07.864 3730.511 - 3755.717: 0.0631% ( 2) 00:09:07.864 3755.717 - 3780.923: 0.0771% ( 2) 00:09:07.864 3780.923 - 3806.129: 0.0911% ( 2) 00:09:07.864 3806.129 - 3831.335: 0.1191% ( 4) 00:09:07.864 3831.335 - 3856.542: 0.1331% ( 2) 00:09:07.864 3856.542 - 3881.748: 0.1471% ( 2) 00:09:07.864 3881.748 - 3906.954: 0.1612% ( 2) 00:09:07.864 3906.954 - 3932.160: 0.1752% ( 2) 00:09:07.864 3932.160 - 3957.366: 0.1892% ( 2) 00:09:07.864 3957.366 - 3982.572: 0.2032% ( 2) 00:09:07.864 3982.572 - 4007.778: 0.2172% ( 2) 00:09:07.864 4007.778 - 4032.985: 0.2312% ( 2) 00:09:07.864 4032.985 - 4058.191: 0.2452% ( 2) 00:09:07.864 4058.191 - 4083.397: 0.2663% ( 3) 00:09:07.864 4083.397 - 4108.603: 0.2803% ( 2) 00:09:07.864 4108.603 - 4133.809: 0.2943% ( 2) 00:09:07.864 4133.809 - 4159.015: 0.3083% ( 2) 00:09:07.864 4159.015 - 4184.222: 0.3293% ( 3) 00:09:07.864 4184.222 - 4209.428: 0.3433% ( 2) 00:09:07.864 4209.428 - 4234.634: 0.3573% ( 2) 00:09:07.864 4234.634 - 4259.840: 0.3714% ( 2) 00:09:07.864 4259.840 - 4285.046: 0.3924% ( 3) 00:09:07.864 4285.046 - 4310.252: 0.4064% ( 2) 00:09:07.864 4310.252 - 4335.458: 0.4204% ( 2) 00:09:07.864 4335.458 - 4360.665: 0.4414% ( 3) 00:09:07.864 4360.665 - 4385.871: 0.4484% ( 1) 00:09:07.864 5217.674 - 5242.880: 0.4835% ( 5) 00:09:07.864 5242.880 - 5268.086: 0.4975% ( 2) 00:09:07.864 5268.086 - 5293.292: 0.5045% ( 1) 00:09:07.864 5293.292 - 5318.498: 0.5255% ( 3) 00:09:07.864 5318.498 - 5343.705: 0.5395% ( 2) 00:09:07.864 5343.705 - 5368.911: 0.5535% ( 2) 00:09:07.864 5368.911 - 5394.117: 0.5675% ( 2) 00:09:07.864 5394.117 - 5419.323: 0.5886% ( 3) 00:09:07.864 5419.323 - 5444.529: 0.6026% ( 2) 00:09:07.864 5444.529 - 5469.735: 0.6166% ( 2) 00:09:07.864 5469.735 - 5494.942: 0.6306% ( 2) 00:09:07.864 5494.942 - 5520.148: 0.6516% ( 3) 00:09:07.864 5520.148 - 5545.354: 0.6656% ( 2) 00:09:07.864 5545.354 - 5570.560: 0.6797% ( 2) 00:09:07.864 5570.560 - 5595.766: 0.6937% ( 2) 00:09:07.864 5595.766 - 5620.972: 0.7007% ( 1) 00:09:07.864 5620.972 - 5646.178: 0.7147% ( 2) 00:09:07.864 5646.178 - 5671.385: 0.7357% ( 3) 00:09:07.864 5671.385 - 5696.591: 0.7497% ( 2) 00:09:07.864 5696.591 - 5721.797: 0.7637% ( 2) 00:09:07.864 5721.797 - 5747.003: 0.7777% ( 2) 00:09:07.864 5747.003 - 5772.209: 0.7988% ( 3) 00:09:07.864 5772.209 - 5797.415: 0.8128% ( 2) 00:09:07.864 5797.415 - 5822.622: 0.8268% ( 2) 00:09:07.865 5822.622 - 5847.828: 0.8478% ( 3) 00:09:07.865 5847.828 - 5873.034: 0.8618% ( 2) 00:09:07.865 5873.034 - 5898.240: 0.8688% ( 1) 00:09:07.865 5898.240 - 5923.446: 0.8899% ( 3) 00:09:07.865 5923.446 - 5948.652: 0.8969% ( 1) 00:09:07.865 
5948.652 - 5973.858: 0.9039% ( 1) 00:09:07.865 5973.858 - 5999.065: 1.0230% ( 17) 00:09:07.865 5999.065 - 6024.271: 1.3523% ( 47) 00:09:07.865 6024.271 - 6049.477: 2.0179% ( 95) 00:09:07.865 6049.477 - 6074.683: 3.0549% ( 148) 00:09:07.865 6074.683 - 6099.889: 4.2531% ( 171) 00:09:07.865 6099.889 - 6125.095: 5.7876% ( 219) 00:09:07.865 6125.095 - 6150.302: 7.6233% ( 262) 00:09:07.865 6150.302 - 6175.508: 9.4451% ( 260) 00:09:07.865 6175.508 - 6200.714: 11.2038% ( 251) 00:09:07.865 6200.714 - 6225.920: 12.8924% ( 241) 00:09:07.865 6225.920 - 6251.126: 14.5460% ( 236) 00:09:07.865 6251.126 - 6276.332: 16.1645% ( 231) 00:09:07.865 6276.332 - 6301.538: 17.8041% ( 234) 00:09:07.865 6301.538 - 6326.745: 19.5908% ( 255) 00:09:07.865 6326.745 - 6351.951: 21.3635% ( 253) 00:09:07.865 6351.951 - 6377.157: 23.1362% ( 253) 00:09:07.865 6377.157 - 6402.363: 24.9019% ( 252) 00:09:07.865 6402.363 - 6427.569: 26.6816% ( 254) 00:09:07.865 6427.569 - 6452.775: 28.5384% ( 265) 00:09:07.865 6452.775 - 6503.188: 31.9857% ( 492) 00:09:07.865 6503.188 - 6553.600: 35.5381% ( 507) 00:09:07.865 6553.600 - 6604.012: 39.0975% ( 508) 00:09:07.865 6604.012 - 6654.425: 42.6079% ( 501) 00:09:07.865 6654.425 - 6704.837: 46.2164% ( 515) 00:09:07.865 6704.837 - 6755.249: 49.7968% ( 511) 00:09:07.865 6755.249 - 6805.662: 53.4053% ( 515) 00:09:07.865 6805.662 - 6856.074: 56.9016% ( 499) 00:09:07.865 6856.074 - 6906.486: 60.3700% ( 495) 00:09:07.865 6906.486 - 6956.898: 63.0605% ( 384) 00:09:07.865 6956.898 - 7007.311: 64.5950% ( 219) 00:09:07.865 7007.311 - 7057.723: 65.3307% ( 105) 00:09:07.865 7057.723 - 7108.135: 65.7441% ( 59) 00:09:07.865 7108.135 - 7158.548: 66.0874% ( 49) 00:09:07.865 7158.548 - 7208.960: 66.3817% ( 42) 00:09:07.865 7208.960 - 7259.372: 66.5709% ( 27) 00:09:07.865 7259.372 - 7309.785: 66.6690% ( 14) 00:09:07.865 7309.785 - 7360.197: 66.7601% ( 13) 00:09:07.865 7360.197 - 7410.609: 66.8582% ( 14) 00:09:07.865 7410.609 - 7461.022: 66.9493% ( 13) 00:09:07.865 7461.022 - 7511.434: 67.0334% ( 12) 00:09:07.865 7511.434 - 7561.846: 67.0894% ( 8) 00:09:07.865 7561.846 - 7612.258: 67.1244% ( 5) 00:09:07.865 7612.258 - 7662.671: 67.1455% ( 3) 00:09:07.865 7662.671 - 7713.083: 67.1735% ( 4) 00:09:07.865 7713.083 - 7763.495: 67.1945% ( 3) 00:09:07.865 7763.495 - 7813.908: 67.2225% ( 4) 00:09:07.865 7813.908 - 7864.320: 67.2436% ( 3) 00:09:07.865 7864.320 - 7914.732: 67.2646% ( 3) 00:09:07.865 7965.145 - 8015.557: 67.2856% ( 3) 00:09:07.865 8015.557 - 8065.969: 67.3066% ( 3) 00:09:07.865 8065.969 - 8116.382: 67.3276% ( 3) 00:09:07.865 8116.382 - 8166.794: 67.3627% ( 5) 00:09:07.865 8166.794 - 8217.206: 67.3837% ( 3) 00:09:07.865 8217.206 - 8267.618: 67.4117% ( 4) 00:09:07.865 8267.618 - 8318.031: 67.4327% ( 3) 00:09:07.865 8318.031 - 8368.443: 67.4538% ( 3) 00:09:07.865 8368.443 - 8418.855: 67.4818% ( 4) 00:09:07.865 8418.855 - 8469.268: 67.5589% ( 11) 00:09:07.865 8469.268 - 8519.680: 67.6359% ( 11) 00:09:07.865 8519.680 - 8570.092: 67.7270% ( 13) 00:09:07.865 8570.092 - 8620.505: 67.8461% ( 17) 00:09:07.865 8620.505 - 8670.917: 68.0143% ( 24) 00:09:07.865 8670.917 - 8721.329: 68.1614% ( 21) 00:09:07.865 8721.329 - 8771.742: 68.3086% ( 21) 00:09:07.865 8771.742 - 8822.154: 68.4487% ( 20) 00:09:07.865 8822.154 - 8872.566: 68.5888% ( 20) 00:09:07.865 8872.566 - 8922.978: 68.7430% ( 22) 00:09:07.865 8922.978 - 8973.391: 68.9182% ( 25) 00:09:07.865 8973.391 - 9023.803: 69.0793% ( 23) 00:09:07.865 9023.803 - 9074.215: 69.2335% ( 22) 00:09:07.865 9074.215 - 9124.628: 69.4226% ( 27) 00:09:07.865 9124.628 - 9175.040: 
69.5838% ( 23) 00:09:07.865 9175.040 - 9225.452: 69.7590% ( 25) 00:09:07.865 9225.452 - 9275.865: 69.9201% ( 23) 00:09:07.865 9275.865 - 9326.277: 70.0953% ( 25) 00:09:07.865 9326.277 - 9376.689: 70.2214% ( 18) 00:09:07.865 9376.689 - 9427.102: 70.3545% ( 19) 00:09:07.865 9427.102 - 9477.514: 70.4456% ( 13) 00:09:07.865 9477.514 - 9527.926: 70.5577% ( 16) 00:09:07.865 9527.926 - 9578.338: 70.6348% ( 11) 00:09:07.865 9578.338 - 9628.751: 70.7049% ( 10) 00:09:07.865 9628.751 - 9679.163: 70.7749% ( 10) 00:09:07.865 9679.163 - 9729.575: 70.8450% ( 10) 00:09:07.865 9729.575 - 9779.988: 70.9221% ( 11) 00:09:07.865 9779.988 - 9830.400: 70.9781% ( 8) 00:09:07.865 9830.400 - 9880.812: 71.0202% ( 6) 00:09:07.865 9880.812 - 9931.225: 71.0622% ( 6) 00:09:07.865 9931.225 - 9981.637: 71.0832% ( 3) 00:09:07.865 9981.637 - 10032.049: 71.1043% ( 3) 00:09:07.865 10032.049 - 10082.462: 71.1253% ( 3) 00:09:07.865 10082.462 - 10132.874: 71.1533% ( 4) 00:09:07.865 10132.874 - 10183.286: 71.1743% ( 3) 00:09:07.865 10183.286 - 10233.698: 71.2024% ( 4) 00:09:07.865 10233.698 - 10284.111: 71.2234% ( 3) 00:09:07.865 10284.111 - 10334.523: 71.2514% ( 4) 00:09:07.865 10334.523 - 10384.935: 71.2724% ( 3) 00:09:07.865 10384.935 - 10435.348: 71.2934% ( 3) 00:09:07.865 10435.348 - 10485.760: 71.3004% ( 1) 00:09:07.865 10636.997 - 10687.409: 71.3285% ( 4) 00:09:07.865 10687.409 - 10737.822: 71.3495% ( 3) 00:09:07.865 10737.822 - 10788.234: 71.3705% ( 3) 00:09:07.865 10788.234 - 10838.646: 71.3985% ( 4) 00:09:07.865 10838.646 - 10889.058: 71.4196% ( 3) 00:09:07.865 10889.058 - 10939.471: 71.4686% ( 7) 00:09:07.865 10939.471 - 10989.883: 71.5177% ( 7) 00:09:07.865 10989.883 - 11040.295: 71.5667% ( 7) 00:09:07.865 11040.295 - 11090.708: 71.6087% ( 6) 00:09:07.865 11090.708 - 11141.120: 71.6508% ( 6) 00:09:07.865 11141.120 - 11191.532: 71.7209% ( 10) 00:09:07.865 11191.532 - 11241.945: 71.7839% ( 9) 00:09:07.865 11241.945 - 11292.357: 71.8680% ( 12) 00:09:07.865 11292.357 - 11342.769: 71.9311% ( 9) 00:09:07.865 11342.769 - 11393.182: 71.9941% ( 9) 00:09:07.865 11393.182 - 11443.594: 72.0642% ( 10) 00:09:07.865 11443.594 - 11494.006: 72.1413% ( 11) 00:09:07.865 11494.006 - 11544.418: 72.2393% ( 14) 00:09:07.865 11544.418 - 11594.831: 72.3515% ( 16) 00:09:07.865 11594.831 - 11645.243: 72.4285% ( 11) 00:09:07.865 11645.243 - 11695.655: 72.5336% ( 15) 00:09:07.865 11695.655 - 11746.068: 72.6317% ( 14) 00:09:07.865 11746.068 - 11796.480: 72.7649% ( 19) 00:09:07.865 11796.480 - 11846.892: 72.8840% ( 17) 00:09:07.865 11846.892 - 11897.305: 73.0031% ( 17) 00:09:07.865 11897.305 - 11947.717: 73.1222% ( 17) 00:09:07.865 11947.717 - 11998.129: 73.2483% ( 18) 00:09:07.865 11998.129 - 12048.542: 73.3814% ( 19) 00:09:07.865 12048.542 - 12098.954: 73.5146% ( 19) 00:09:07.865 12098.954 - 12149.366: 73.6267% ( 16) 00:09:07.865 12149.366 - 12199.778: 73.7318% ( 15) 00:09:07.865 12199.778 - 12250.191: 73.8439% ( 16) 00:09:07.865 12250.191 - 12300.603: 73.9630% ( 17) 00:09:07.865 12300.603 - 12351.015: 74.1382% ( 25) 00:09:07.865 12351.015 - 12401.428: 74.3554% ( 31) 00:09:07.865 12401.428 - 12451.840: 74.5796% ( 32) 00:09:07.865 12451.840 - 12502.252: 74.8178% ( 34) 00:09:07.866 12502.252 - 12552.665: 75.0841% ( 38) 00:09:07.866 12552.665 - 12603.077: 75.3433% ( 37) 00:09:07.866 12603.077 - 12653.489: 75.6096% ( 38) 00:09:07.866 12653.489 - 12703.902: 75.8758% ( 38) 00:09:07.866 12703.902 - 12754.314: 76.1141% ( 34) 00:09:07.866 12754.314 - 12804.726: 76.3593% ( 35) 00:09:07.866 12804.726 - 12855.138: 76.6186% ( 37) 00:09:07.866 12855.138 - 
12905.551: 76.9198% ( 43) 00:09:07.866 12905.551 - 13006.375: 77.5154% ( 85) 00:09:07.866 13006.375 - 13107.200: 78.2161% ( 100) 00:09:07.866 13107.200 - 13208.025: 78.9378% ( 103) 00:09:07.866 13208.025 - 13308.849: 79.8206% ( 126) 00:09:07.866 13308.849 - 13409.674: 80.6825% ( 123) 00:09:07.866 13409.674 - 13510.498: 81.4882% ( 115) 00:09:07.866 13510.498 - 13611.323: 82.3080% ( 117) 00:09:07.866 13611.323 - 13712.148: 83.1979% ( 127) 00:09:07.866 13712.148 - 13812.972: 84.1158% ( 131) 00:09:07.866 13812.972 - 13913.797: 84.9636% ( 121) 00:09:07.866 13913.797 - 14014.622: 85.7693% ( 115) 00:09:07.866 14014.622 - 14115.446: 86.5541% ( 112) 00:09:07.866 14115.446 - 14216.271: 87.2408% ( 98) 00:09:07.866 14216.271 - 14317.095: 87.8223% ( 83) 00:09:07.866 14317.095 - 14417.920: 88.3548% ( 76) 00:09:07.866 14417.920 - 14518.745: 88.8383% ( 69) 00:09:07.866 14518.745 - 14619.569: 89.3358% ( 71) 00:09:07.866 14619.569 - 14720.394: 89.6791% ( 49) 00:09:07.866 14720.394 - 14821.218: 89.9944% ( 45) 00:09:07.866 14821.218 - 14922.043: 90.3447% ( 50) 00:09:07.866 14922.043 - 15022.868: 90.6811% ( 48) 00:09:07.866 15022.868 - 15123.692: 91.0594% ( 54) 00:09:07.866 15123.692 - 15224.517: 91.4308% ( 53) 00:09:07.866 15224.517 - 15325.342: 91.8372% ( 58) 00:09:07.866 15325.342 - 15426.166: 92.2225% ( 55) 00:09:07.866 15426.166 - 15526.991: 92.6499% ( 61) 00:09:07.866 15526.991 - 15627.815: 93.0003% ( 50) 00:09:07.866 15627.815 - 15728.640: 93.3366% ( 48) 00:09:07.866 15728.640 - 15829.465: 93.7150% ( 54) 00:09:07.866 15829.465 - 15930.289: 94.1214% ( 58) 00:09:07.866 15930.289 - 16031.114: 94.5067% ( 55) 00:09:07.866 16031.114 - 16131.938: 94.8781% ( 53) 00:09:07.866 16131.938 - 16232.763: 95.2004% ( 46) 00:09:07.866 16232.763 - 16333.588: 95.5367% ( 48) 00:09:07.866 16333.588 - 16434.412: 95.8310% ( 42) 00:09:07.866 16434.412 - 16535.237: 96.0762% ( 35) 00:09:07.866 16535.237 - 16636.062: 96.3145% ( 34) 00:09:07.866 16636.062 - 16736.886: 96.5597% ( 35) 00:09:07.866 16736.886 - 16837.711: 96.7559% ( 28) 00:09:07.866 16837.711 - 16938.535: 96.9170% ( 23) 00:09:07.866 16938.535 - 17039.360: 97.0712% ( 22) 00:09:07.866 17039.360 - 17140.185: 97.1483% ( 11) 00:09:07.866 17140.185 - 17241.009: 97.2323% ( 12) 00:09:07.866 17241.009 - 17341.834: 97.3164% ( 12) 00:09:07.866 17341.834 - 17442.658: 97.3865% ( 10) 00:09:07.866 17442.658 - 17543.483: 97.4636% ( 11) 00:09:07.866 17543.483 - 17644.308: 97.5476% ( 12) 00:09:07.866 17644.308 - 17745.132: 97.5897% ( 6) 00:09:07.866 17745.132 - 17845.957: 97.6107% ( 3) 00:09:07.866 17845.957 - 17946.782: 97.6387% ( 4) 00:09:07.866 17946.782 - 18047.606: 97.6598% ( 3) 00:09:07.866 18047.606 - 18148.431: 97.6878% ( 4) 00:09:07.866 18148.431 - 18249.255: 97.7088% ( 3) 00:09:07.866 18249.255 - 18350.080: 97.7298% ( 3) 00:09:07.866 18350.080 - 18450.905: 97.7578% ( 4) 00:09:07.866 18450.905 - 18551.729: 97.7719% ( 2) 00:09:07.866 18551.729 - 18652.554: 97.8349% ( 9) 00:09:07.866 18652.554 - 18753.378: 97.8629% ( 4) 00:09:07.866 18753.378 - 18854.203: 97.8770% ( 2) 00:09:07.866 18854.203 - 18955.028: 97.8980% ( 3) 00:09:07.866 18955.028 - 19055.852: 97.9330% ( 5) 00:09:07.866 19055.852 - 19156.677: 97.9751% ( 6) 00:09:07.866 19156.677 - 19257.502: 98.0171% ( 6) 00:09:07.866 19257.502 - 19358.326: 98.0591% ( 6) 00:09:07.866 19358.326 - 19459.151: 98.1152% ( 8) 00:09:07.866 19459.151 - 19559.975: 98.2133% ( 14) 00:09:07.866 19559.975 - 19660.800: 98.2974% ( 12) 00:09:07.866 19660.800 - 19761.625: 98.3885% ( 13) 00:09:07.866 19761.625 - 19862.449: 98.4795% ( 13) 00:09:07.866 
19862.449 - 19963.274: 98.5286% ( 7) 00:09:07.866 19963.274 - 20064.098: 98.5846% ( 8) 00:09:07.866 20064.098 - 20164.923: 98.6617% ( 11) 00:09:07.866 20164.923 - 20265.748: 98.7318% ( 10) 00:09:07.866 20265.748 - 20366.572: 98.8018% ( 10) 00:09:07.866 20366.572 - 20467.397: 98.8789% ( 11) 00:09:07.866 20467.397 - 20568.222: 98.9420% ( 9) 00:09:07.866 20568.222 - 20669.046: 99.0261% ( 12) 00:09:07.866 20669.046 - 20769.871: 99.0961% ( 10) 00:09:07.866 20769.871 - 20870.695: 99.1732% ( 11) 00:09:07.866 20870.695 - 20971.520: 99.2433% ( 10) 00:09:07.866 20971.520 - 21072.345: 99.3133% ( 10) 00:09:07.866 21072.345 - 21173.169: 99.3904% ( 11) 00:09:07.866 21173.169 - 21273.994: 99.4535% ( 9) 00:09:07.866 21273.994 - 21374.818: 99.5025% ( 7) 00:09:07.866 21374.818 - 21475.643: 99.5305% ( 4) 00:09:07.866 21475.643 - 21576.468: 99.5516% ( 3) 00:09:07.866 27424.295 - 27625.945: 99.6006% ( 7) 00:09:07.866 27625.945 - 27827.594: 99.6847% ( 12) 00:09:07.866 27827.594 - 28029.243: 99.7688% ( 12) 00:09:07.866 28029.243 - 28230.892: 99.8529% ( 12) 00:09:07.866 28230.892 - 28432.542: 99.9299% ( 11) 00:09:07.866 28432.542 - 28634.191: 100.0000% ( 10) 00:09:07.866 00:09:07.866 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:07.866 ============================================================================== 00:09:07.866 Range in us Cumulative IO count 00:09:07.866 3453.243 - 3478.449: 0.0070% ( 1) 00:09:07.866 3478.449 - 3503.655: 0.0420% ( 5) 00:09:07.866 3503.655 - 3528.862: 0.0631% ( 3) 00:09:07.866 3528.862 - 3554.068: 0.0771% ( 2) 00:09:07.866 3554.068 - 3579.274: 0.0841% ( 1) 00:09:07.866 3579.274 - 3604.480: 0.0981% ( 2) 00:09:07.866 3604.480 - 3629.686: 0.1191% ( 3) 00:09:07.866 3629.686 - 3654.892: 0.1331% ( 2) 00:09:07.866 3654.892 - 3680.098: 0.1471% ( 2) 00:09:07.866 3680.098 - 3705.305: 0.1612% ( 2) 00:09:07.866 3705.305 - 3730.511: 0.1822% ( 3) 00:09:07.866 3730.511 - 3755.717: 0.1962% ( 2) 00:09:07.866 3755.717 - 3780.923: 0.2102% ( 2) 00:09:07.866 3780.923 - 3806.129: 0.2242% ( 2) 00:09:07.866 3806.129 - 3831.335: 0.2382% ( 2) 00:09:07.866 3831.335 - 3856.542: 0.2522% ( 2) 00:09:07.866 3856.542 - 3881.748: 0.2733% ( 3) 00:09:07.866 3881.748 - 3906.954: 0.2873% ( 2) 00:09:07.866 3906.954 - 3932.160: 0.3013% ( 2) 00:09:07.866 3932.160 - 3957.366: 0.3153% ( 2) 00:09:07.866 3957.366 - 3982.572: 0.3363% ( 3) 00:09:07.866 3982.572 - 4007.778: 0.3503% ( 2) 00:09:07.866 4007.778 - 4032.985: 0.3643% ( 2) 00:09:07.866 4032.985 - 4058.191: 0.3784% ( 2) 00:09:07.866 4058.191 - 4083.397: 0.3994% ( 3) 00:09:07.866 4083.397 - 4108.603: 0.4134% ( 2) 00:09:07.866 4108.603 - 4133.809: 0.4274% ( 2) 00:09:07.866 4133.809 - 4159.015: 0.4484% ( 3) 00:09:07.866 5066.437 - 5091.643: 0.4975% ( 7) 00:09:07.866 5091.643 - 5116.849: 0.5325% ( 5) 00:09:07.866 5116.849 - 5142.055: 0.5395% ( 1) 00:09:07.866 5142.055 - 5167.262: 0.5535% ( 2) 00:09:07.866 5167.262 - 5192.468: 0.5605% ( 1) 00:09:07.866 5192.468 - 5217.674: 0.5746% ( 2) 00:09:07.866 5217.674 - 5242.880: 0.5886% ( 2) 00:09:07.866 5242.880 - 5268.086: 0.6026% ( 2) 00:09:07.866 5268.086 - 5293.292: 0.6236% ( 3) 00:09:07.866 5293.292 - 5318.498: 0.6376% ( 2) 00:09:07.866 5318.498 - 5343.705: 0.6516% ( 2) 00:09:07.866 5343.705 - 5368.911: 0.6656% ( 2) 00:09:07.866 5368.911 - 5394.117: 0.6867% ( 3) 00:09:07.867 5394.117 - 5419.323: 0.7007% ( 2) 00:09:07.867 5419.323 - 5444.529: 0.7147% ( 2) 00:09:07.867 5444.529 - 5469.735: 0.7357% ( 3) 00:09:07.867 5469.735 - 5494.942: 0.7497% ( 2) 00:09:07.867 5494.942 - 5520.148: 0.7637% ( 2) 00:09:07.867 
5520.148 - 5545.354: 0.7848% ( 3) 00:09:07.867 5545.354 - 5570.560: 0.7988% ( 2) 00:09:07.867 5570.560 - 5595.766: 0.8128% ( 2) 00:09:07.867 5595.766 - 5620.972: 0.8338% ( 3) 00:09:07.867 5620.972 - 5646.178: 0.8478% ( 2) 00:09:07.867 5646.178 - 5671.385: 0.8618% ( 2) 00:09:07.867 5671.385 - 5696.591: 0.8758% ( 2) 00:09:07.867 5696.591 - 5721.797: 0.8899% ( 2) 00:09:07.867 5721.797 - 5747.003: 0.8969% ( 1) 00:09:07.867 5923.446 - 5948.652: 0.9109% ( 2) 00:09:07.867 5948.652 - 5973.858: 0.9179% ( 1) 00:09:07.867 5973.858 - 5999.065: 1.0160% ( 14) 00:09:07.867 5999.065 - 6024.271: 1.3313% ( 45) 00:09:07.867 6024.271 - 6049.477: 1.8778% ( 78) 00:09:07.867 6049.477 - 6074.683: 2.9498% ( 153) 00:09:07.867 6074.683 - 6099.889: 4.4843% ( 219) 00:09:07.867 6099.889 - 6125.095: 6.0328% ( 221) 00:09:07.867 6125.095 - 6150.302: 7.7494% ( 245) 00:09:07.867 6150.302 - 6175.508: 9.5782% ( 261) 00:09:07.867 6175.508 - 6200.714: 11.2598% ( 240) 00:09:07.867 6200.714 - 6225.920: 13.0675% ( 258) 00:09:07.867 6225.920 - 6251.126: 14.7141% ( 235) 00:09:07.867 6251.126 - 6276.332: 16.3187% ( 229) 00:09:07.867 6276.332 - 6301.538: 17.9793% ( 237) 00:09:07.867 6301.538 - 6326.745: 19.6889% ( 244) 00:09:07.867 6326.745 - 6351.951: 21.4266% ( 248) 00:09:07.867 6351.951 - 6377.157: 23.4235% ( 285) 00:09:07.867 6377.157 - 6402.363: 25.1682% ( 249) 00:09:07.867 6402.363 - 6427.569: 27.0109% ( 263) 00:09:07.867 6427.569 - 6452.775: 28.6855% ( 239) 00:09:07.867 6452.775 - 6503.188: 32.1679% ( 497) 00:09:07.867 6503.188 - 6553.600: 35.6572% ( 498) 00:09:07.867 6553.600 - 6604.012: 39.1606% ( 500) 00:09:07.867 6604.012 - 6654.425: 42.6499% ( 498) 00:09:07.867 6654.425 - 6704.837: 46.1953% ( 506) 00:09:07.867 6704.837 - 6755.249: 49.7548% ( 508) 00:09:07.867 6755.249 - 6805.662: 53.3282% ( 510) 00:09:07.867 6805.662 - 6856.074: 56.8596% ( 504) 00:09:07.867 6856.074 - 6906.486: 60.2508% ( 484) 00:09:07.867 6906.486 - 6956.898: 62.9414% ( 384) 00:09:07.867 6956.898 - 7007.311: 64.4829% ( 220) 00:09:07.867 7007.311 - 7057.723: 65.2116% ( 104) 00:09:07.867 7057.723 - 7108.135: 65.6250% ( 59) 00:09:07.867 7108.135 - 7158.548: 65.9683% ( 49) 00:09:07.867 7158.548 - 7208.960: 66.1715% ( 29) 00:09:07.867 7208.960 - 7259.372: 66.3047% ( 19) 00:09:07.867 7259.372 - 7309.785: 66.4308% ( 18) 00:09:07.867 7309.785 - 7360.197: 66.4798% ( 7) 00:09:07.867 7360.197 - 7410.609: 66.5569% ( 11) 00:09:07.867 7410.609 - 7461.022: 66.6480% ( 13) 00:09:07.867 7461.022 - 7511.434: 66.7531% ( 15) 00:09:07.867 7511.434 - 7561.846: 66.8442% ( 13) 00:09:07.867 7561.846 - 7612.258: 66.9353% ( 13) 00:09:07.867 7612.258 - 7662.671: 67.0123% ( 11) 00:09:07.867 7662.671 - 7713.083: 67.0824% ( 10) 00:09:07.867 7713.083 - 7763.495: 67.1525% ( 10) 00:09:07.867 7763.495 - 7813.908: 67.2295% ( 11) 00:09:07.867 7813.908 - 7864.320: 67.2996% ( 10) 00:09:07.867 7864.320 - 7914.732: 67.3767% ( 11) 00:09:07.867 7914.732 - 7965.145: 67.4397% ( 9) 00:09:07.867 7965.145 - 8015.557: 67.4888% ( 7) 00:09:07.867 8015.557 - 8065.969: 67.5378% ( 7) 00:09:07.867 8065.969 - 8116.382: 67.5869% ( 7) 00:09:07.867 8116.382 - 8166.794: 67.6289% ( 6) 00:09:07.867 8166.794 - 8217.206: 67.6920% ( 9) 00:09:07.867 8217.206 - 8267.618: 67.7550% ( 9) 00:09:07.867 8267.618 - 8318.031: 67.8391% ( 12) 00:09:07.867 8318.031 - 8368.443: 67.9793% ( 20) 00:09:07.867 8368.443 - 8418.855: 68.0703% ( 13) 00:09:07.867 8418.855 - 8469.268: 68.1614% ( 13) 00:09:07.867 8469.268 - 8519.680: 68.2665% ( 15) 00:09:07.867 8519.680 - 8570.092: 68.3576% ( 13) 00:09:07.867 8570.092 - 8620.505: 68.4627% ( 15) 
00:09:07.867 8620.505 - 8670.917: 68.5468% ( 12) 00:09:07.867 8670.917 - 8721.329: 68.6519% ( 15) 00:09:07.867 8721.329 - 8771.742: 68.7710% ( 17) 00:09:07.867 8771.742 - 8822.154: 68.9182% ( 21) 00:09:07.867 8822.154 - 8872.566: 69.0653% ( 21) 00:09:07.867 8872.566 - 8922.978: 69.2124% ( 21) 00:09:07.867 8922.978 - 8973.391: 69.3456% ( 19) 00:09:07.867 8973.391 - 9023.803: 69.4927% ( 21) 00:09:07.867 9023.803 - 9074.215: 69.6399% ( 21) 00:09:07.867 9074.215 - 9124.628: 69.7870% ( 21) 00:09:07.867 9124.628 - 9175.040: 69.9061% ( 17) 00:09:07.867 9175.040 - 9225.452: 70.0042% ( 14) 00:09:07.867 9225.452 - 9275.865: 70.0813% ( 11) 00:09:07.867 9275.865 - 9326.277: 70.1513% ( 10) 00:09:07.867 9326.277 - 9376.689: 70.2354% ( 12) 00:09:07.867 9376.689 - 9427.102: 70.3055% ( 10) 00:09:07.867 9427.102 - 9477.514: 70.3756% ( 10) 00:09:07.867 9477.514 - 9527.926: 70.4456% ( 10) 00:09:07.867 9527.926 - 9578.338: 70.5227% ( 11) 00:09:07.867 9578.338 - 9628.751: 70.5928% ( 10) 00:09:07.867 9628.751 - 9679.163: 70.6839% ( 13) 00:09:07.867 9679.163 - 9729.575: 70.7259% ( 6) 00:09:07.867 9729.575 - 9779.988: 70.7749% ( 7) 00:09:07.867 9779.988 - 9830.400: 70.8170% ( 6) 00:09:07.867 9830.400 - 9880.812: 70.8730% ( 8) 00:09:07.867 9880.812 - 9931.225: 70.9151% ( 6) 00:09:07.867 9931.225 - 9981.637: 70.9641% ( 7) 00:09:07.867 9981.637 - 10032.049: 71.0132% ( 7) 00:09:07.867 10032.049 - 10082.462: 71.0552% ( 6) 00:09:07.867 10082.462 - 10132.874: 71.0832% ( 4) 00:09:07.867 10132.874 - 10183.286: 71.1043% ( 3) 00:09:07.867 10183.286 - 10233.698: 71.1323% ( 4) 00:09:07.867 10233.698 - 10284.111: 71.1533% ( 3) 00:09:07.867 10284.111 - 10334.523: 71.1743% ( 3) 00:09:07.867 10334.523 - 10384.935: 71.1953% ( 3) 00:09:07.867 10384.935 - 10435.348: 71.2234% ( 4) 00:09:07.867 10435.348 - 10485.760: 71.2724% ( 7) 00:09:07.867 10485.760 - 10536.172: 71.3635% ( 13) 00:09:07.867 10536.172 - 10586.585: 71.4336% ( 10) 00:09:07.867 10586.585 - 10636.997: 71.4756% ( 6) 00:09:07.867 10636.997 - 10687.409: 71.5247% ( 7) 00:09:07.867 10687.409 - 10737.822: 71.5737% ( 7) 00:09:07.867 10737.822 - 10788.234: 71.6158% ( 6) 00:09:07.867 10788.234 - 10838.646: 71.6578% ( 6) 00:09:07.867 10838.646 - 10889.058: 71.6998% ( 6) 00:09:07.867 10889.058 - 10939.471: 71.7419% ( 6) 00:09:07.867 10939.471 - 10989.883: 71.7979% ( 8) 00:09:07.867 10989.883 - 11040.295: 71.8540% ( 8) 00:09:07.867 11040.295 - 11090.708: 71.9311% ( 11) 00:09:07.867 11090.708 - 11141.120: 71.9941% ( 9) 00:09:07.867 11141.120 - 11191.532: 72.0642% ( 10) 00:09:07.867 11191.532 - 11241.945: 72.1272% ( 9) 00:09:07.867 11241.945 - 11292.357: 72.1973% ( 10) 00:09:07.867 11292.357 - 11342.769: 72.2884% ( 13) 00:09:07.867 11342.769 - 11393.182: 72.3935% ( 15) 00:09:07.867 11393.182 - 11443.594: 72.4566% ( 9) 00:09:07.867 11443.594 - 11494.006: 72.5126% ( 8) 00:09:07.867 11494.006 - 11544.418: 72.5827% ( 10) 00:09:07.867 11544.418 - 11594.831: 72.6387% ( 8) 00:09:07.867 11594.831 - 11645.243: 72.7158% ( 11) 00:09:07.867 11645.243 - 11695.655: 72.7719% ( 8) 00:09:07.867 11695.655 - 11746.068: 72.8279% ( 8) 00:09:07.867 11746.068 - 11796.480: 72.8980% ( 10) 00:09:07.867 11796.480 - 11846.892: 72.9821% ( 12) 00:09:07.867 11846.892 - 11897.305: 73.1082% ( 18) 00:09:07.867 11897.305 - 11947.717: 73.2273% ( 17) 00:09:07.867 11947.717 - 11998.129: 73.3534% ( 18) 00:09:07.867 11998.129 - 12048.542: 73.5076% ( 22) 00:09:07.867 12048.542 - 12098.954: 73.6617% ( 22) 00:09:07.867 12098.954 - 12149.366: 73.8509% ( 27) 00:09:07.867 12149.366 - 12199.778: 74.0191% ( 24) 00:09:07.867 
12199.778 - 12250.191: 74.1662% ( 21) 00:09:07.867 12250.191 - 12300.603: 74.3274% ( 23) 00:09:07.867 12300.603 - 12351.015: 74.4815% ( 22) 00:09:07.868 12351.015 - 12401.428: 74.6216% ( 20) 00:09:07.868 12401.428 - 12451.840: 74.8809% ( 37) 00:09:07.868 12451.840 - 12502.252: 75.0981% ( 31) 00:09:07.868 12502.252 - 12552.665: 75.3433% ( 35) 00:09:07.868 12552.665 - 12603.077: 75.6516% ( 44) 00:09:07.868 12603.077 - 12653.489: 75.9179% ( 38) 00:09:07.868 12653.489 - 12703.902: 76.1631% ( 35) 00:09:07.868 12703.902 - 12754.314: 76.4294% ( 38) 00:09:07.868 12754.314 - 12804.726: 76.6606% ( 33) 00:09:07.868 12804.726 - 12855.138: 76.9479% ( 41) 00:09:07.868 12855.138 - 12905.551: 77.2351% ( 41) 00:09:07.868 12905.551 - 13006.375: 77.9078% ( 96) 00:09:07.868 13006.375 - 13107.200: 78.5874% ( 97) 00:09:07.868 13107.200 - 13208.025: 79.3512% ( 109) 00:09:07.868 13208.025 - 13308.849: 80.1359% ( 112) 00:09:07.868 13308.849 - 13409.674: 80.9137% ( 111) 00:09:07.868 13409.674 - 13510.498: 81.6844% ( 110) 00:09:07.868 13510.498 - 13611.323: 82.4341% ( 107) 00:09:07.868 13611.323 - 13712.148: 83.2329% ( 114) 00:09:07.868 13712.148 - 13812.972: 84.0737% ( 120) 00:09:07.868 13812.972 - 13913.797: 84.7954% ( 103) 00:09:07.868 13913.797 - 14014.622: 85.3700% ( 82) 00:09:07.868 14014.622 - 14115.446: 85.9585% ( 84) 00:09:07.868 14115.446 - 14216.271: 86.4980% ( 77) 00:09:07.868 14216.271 - 14317.095: 87.1216% ( 89) 00:09:07.868 14317.095 - 14417.920: 87.6331% ( 73) 00:09:07.868 14417.920 - 14518.745: 88.1797% ( 78) 00:09:07.868 14518.745 - 14619.569: 88.6001% ( 60) 00:09:07.868 14619.569 - 14720.394: 89.0625% ( 66) 00:09:07.868 14720.394 - 14821.218: 89.5740% ( 73) 00:09:07.868 14821.218 - 14922.043: 90.1836% ( 87) 00:09:07.868 14922.043 - 15022.868: 90.7301% ( 78) 00:09:07.868 15022.868 - 15123.692: 91.1575% ( 61) 00:09:07.868 15123.692 - 15224.517: 91.5359% ( 54) 00:09:07.868 15224.517 - 15325.342: 91.9002% ( 52) 00:09:07.868 15325.342 - 15426.166: 92.2786% ( 54) 00:09:07.868 15426.166 - 15526.991: 92.6289% ( 50) 00:09:07.868 15526.991 - 15627.815: 92.9863% ( 51) 00:09:07.868 15627.815 - 15728.640: 93.2876% ( 43) 00:09:07.868 15728.640 - 15829.465: 93.5328% ( 35) 00:09:07.868 15829.465 - 15930.289: 93.7710% ( 34) 00:09:07.868 15930.289 - 16031.114: 94.0653% ( 42) 00:09:07.868 16031.114 - 16131.938: 94.3666% ( 43) 00:09:07.868 16131.938 - 16232.763: 94.7239% ( 51) 00:09:07.868 16232.763 - 16333.588: 95.1443% ( 60) 00:09:07.868 16333.588 - 16434.412: 95.5577% ( 59) 00:09:07.868 16434.412 - 16535.237: 95.8240% ( 38) 00:09:07.868 16535.237 - 16636.062: 96.1393% ( 45) 00:09:07.868 16636.062 - 16736.886: 96.4686% ( 47) 00:09:07.868 16736.886 - 16837.711: 96.7769% ( 44) 00:09:07.868 16837.711 - 16938.535: 97.0432% ( 38) 00:09:07.868 16938.535 - 17039.360: 97.2113% ( 24) 00:09:07.868 17039.360 - 17140.185: 97.3865% ( 25) 00:09:07.868 17140.185 - 17241.009: 97.5056% ( 17) 00:09:07.868 17241.009 - 17341.834: 97.5897% ( 12) 00:09:07.868 17341.834 - 17442.658: 97.6457% ( 8) 00:09:07.868 17442.658 - 17543.483: 97.6878% ( 6) 00:09:07.868 17543.483 - 17644.308: 97.7158% ( 4) 00:09:07.868 17644.308 - 17745.132: 97.7438% ( 4) 00:09:07.868 17745.132 - 17845.957: 97.7578% ( 2) 00:09:07.868 18249.255 - 18350.080: 97.7649% ( 1) 00:09:07.868 18350.080 - 18450.905: 97.8069% ( 6) 00:09:07.868 18450.905 - 18551.729: 97.8489% ( 6) 00:09:07.868 18551.729 - 18652.554: 97.9330% ( 12) 00:09:07.868 18652.554 - 18753.378: 97.9891% ( 8) 00:09:07.868 18753.378 - 18854.203: 98.0241% ( 5) 00:09:07.868 18854.203 - 18955.028: 98.0381% ( 2) 
00:09:07.868 18955.028 - 19055.852: 98.0802% ( 6) 00:09:07.868 19055.852 - 19156.677: 98.1222% ( 6) 00:09:07.868 19156.677 - 19257.502: 98.1642% ( 6) 00:09:07.868 19257.502 - 19358.326: 98.2343% ( 10) 00:09:07.868 19358.326 - 19459.151: 98.2623% ( 4) 00:09:07.868 19459.151 - 19559.975: 98.2834% ( 3) 00:09:07.868 19559.975 - 19660.800: 98.3114% ( 4) 00:09:07.868 19660.800 - 19761.625: 98.3674% ( 8) 00:09:07.868 19761.625 - 19862.449: 98.4515% ( 12) 00:09:07.868 19862.449 - 19963.274: 98.5566% ( 15) 00:09:07.868 19963.274 - 20064.098: 98.6197% ( 9) 00:09:07.868 20064.098 - 20164.923: 98.6687% ( 7) 00:09:07.868 20164.923 - 20265.748: 98.7458% ( 11) 00:09:07.868 20265.748 - 20366.572: 98.8089% ( 9) 00:09:07.868 20366.572 - 20467.397: 98.8929% ( 12) 00:09:07.868 20467.397 - 20568.222: 98.9630% ( 10) 00:09:07.868 20568.222 - 20669.046: 99.0191% ( 8) 00:09:07.868 20669.046 - 20769.871: 99.0821% ( 9) 00:09:07.868 20769.871 - 20870.695: 99.1522% ( 10) 00:09:07.868 20870.695 - 20971.520: 99.2152% ( 9) 00:09:07.868 20971.520 - 21072.345: 99.2783% ( 9) 00:09:07.868 21072.345 - 21173.169: 99.3203% ( 6) 00:09:07.868 21173.169 - 21273.994: 99.3694% ( 7) 00:09:07.868 21273.994 - 21374.818: 99.4184% ( 7) 00:09:07.868 21374.818 - 21475.643: 99.4605% ( 6) 00:09:07.868 21475.643 - 21576.468: 99.5095% ( 7) 00:09:07.868 21576.468 - 21677.292: 99.5516% ( 6) 00:09:07.868 27020.997 - 27222.646: 99.5586% ( 1) 00:09:07.868 27222.646 - 27424.295: 99.6006% ( 6) 00:09:07.868 27424.295 - 27625.945: 99.6567% ( 8) 00:09:07.868 27625.945 - 27827.594: 99.7127% ( 8) 00:09:07.868 27827.594 - 28029.243: 99.7758% ( 9) 00:09:07.868 28029.243 - 28230.892: 99.8529% ( 11) 00:09:07.868 28230.892 - 28432.542: 99.9369% ( 12) 00:09:07.868 28432.542 - 28634.191: 100.0000% ( 9) 00:09:07.868 00:09:07.868 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:07.868 ============================================================================== 00:09:07.868 Range in us Cumulative IO count 00:09:07.868 3251.594 - 3276.800: 0.0490% ( 7) 00:09:07.868 3276.800 - 3302.006: 0.0701% ( 3) 00:09:07.868 3302.006 - 3327.212: 0.0981% ( 4) 00:09:07.868 3327.212 - 3352.418: 0.1051% ( 1) 00:09:07.868 3377.625 - 3402.831: 0.1191% ( 2) 00:09:07.868 3402.831 - 3428.037: 0.1331% ( 2) 00:09:07.868 3428.037 - 3453.243: 0.1471% ( 2) 00:09:07.868 3453.243 - 3478.449: 0.1612% ( 2) 00:09:07.868 3478.449 - 3503.655: 0.1752% ( 2) 00:09:07.868 3503.655 - 3528.862: 0.1892% ( 2) 00:09:07.868 3528.862 - 3554.068: 0.2032% ( 2) 00:09:07.868 3554.068 - 3579.274: 0.2242% ( 3) 00:09:07.868 3579.274 - 3604.480: 0.2382% ( 2) 00:09:07.868 3604.480 - 3629.686: 0.2522% ( 2) 00:09:07.868 3629.686 - 3654.892: 0.2663% ( 2) 00:09:07.868 3654.892 - 3680.098: 0.2803% ( 2) 00:09:07.868 3680.098 - 3705.305: 0.2943% ( 2) 00:09:07.868 3705.305 - 3730.511: 0.3083% ( 2) 00:09:07.868 3730.511 - 3755.717: 0.3293% ( 3) 00:09:07.868 3755.717 - 3780.923: 0.3433% ( 2) 00:09:07.868 3780.923 - 3806.129: 0.3573% ( 2) 00:09:07.868 3806.129 - 3831.335: 0.3714% ( 2) 00:09:07.868 3831.335 - 3856.542: 0.3854% ( 2) 00:09:07.868 3856.542 - 3881.748: 0.3994% ( 2) 00:09:07.868 3881.748 - 3906.954: 0.4134% ( 2) 00:09:07.868 3906.954 - 3932.160: 0.4274% ( 2) 00:09:07.868 3932.160 - 3957.366: 0.4484% ( 3) 00:09:07.868 4814.375 - 4839.582: 0.4624% ( 2) 00:09:07.868 4839.582 - 4864.788: 0.4975% ( 5) 00:09:07.868 4864.788 - 4889.994: 0.5115% ( 2) 00:09:07.868 4889.994 - 4915.200: 0.5185% ( 1) 00:09:07.868 4915.200 - 4940.406: 0.5325% ( 2) 00:09:07.868 4940.406 - 4965.612: 0.5535% ( 3) 00:09:07.868 
4965.612 - 4990.818: 0.5675% ( 2) 00:09:07.868 4990.818 - 5016.025: 0.5746% ( 1) 00:09:07.868 5016.025 - 5041.231: 0.6026% ( 4) 00:09:07.868 5041.231 - 5066.437: 0.6166% ( 2) 00:09:07.868 5066.437 - 5091.643: 0.6306% ( 2) 00:09:07.868 5091.643 - 5116.849: 0.6446% ( 2) 00:09:07.868 5116.849 - 5142.055: 0.6586% ( 2) 00:09:07.869 5142.055 - 5167.262: 0.6797% ( 3) 00:09:07.869 5167.262 - 5192.468: 0.6937% ( 2) 00:09:07.869 5192.468 - 5217.674: 0.7077% ( 2) 00:09:07.869 5217.674 - 5242.880: 0.7287% ( 3) 00:09:07.869 5242.880 - 5268.086: 0.7427% ( 2) 00:09:07.869 5268.086 - 5293.292: 0.7567% ( 2) 00:09:07.869 5293.292 - 5318.498: 0.7707% ( 2) 00:09:07.869 5318.498 - 5343.705: 0.7918% ( 3) 00:09:07.869 5343.705 - 5368.911: 0.8058% ( 2) 00:09:07.869 5368.911 - 5394.117: 0.8198% ( 2) 00:09:07.869 5394.117 - 5419.323: 0.8408% ( 3) 00:09:07.869 5419.323 - 5444.529: 0.8548% ( 2) 00:09:07.869 5444.529 - 5469.735: 0.8688% ( 2) 00:09:07.869 5469.735 - 5494.942: 0.8828% ( 2) 00:09:07.869 5494.942 - 5520.148: 0.8969% ( 2) 00:09:07.869 5948.652 - 5973.858: 0.9039% ( 1) 00:09:07.869 5973.858 - 5999.065: 1.0230% ( 17) 00:09:07.869 5999.065 - 6024.271: 1.3033% ( 40) 00:09:07.869 6024.271 - 6049.477: 1.9268% ( 89) 00:09:07.869 6049.477 - 6074.683: 2.6626% ( 105) 00:09:07.869 6074.683 - 6099.889: 3.7696% ( 158) 00:09:07.869 6099.889 - 6125.095: 5.6194% ( 264) 00:09:07.869 6125.095 - 6150.302: 7.7564% ( 305) 00:09:07.869 6150.302 - 6175.508: 9.8935% ( 305) 00:09:07.869 6175.508 - 6200.714: 11.7012% ( 258) 00:09:07.869 6200.714 - 6225.920: 13.3478% ( 235) 00:09:07.869 6225.920 - 6251.126: 14.9173% ( 224) 00:09:07.869 6251.126 - 6276.332: 16.5779% ( 237) 00:09:07.869 6276.332 - 6301.538: 18.2595% ( 240) 00:09:07.869 6301.538 - 6326.745: 19.8991% ( 234) 00:09:07.869 6326.745 - 6351.951: 21.6087% ( 244) 00:09:07.869 6351.951 - 6377.157: 23.3254% ( 245) 00:09:07.869 6377.157 - 6402.363: 25.1822% ( 265) 00:09:07.869 6402.363 - 6427.569: 26.8918% ( 244) 00:09:07.869 6427.569 - 6452.775: 28.6435% ( 250) 00:09:07.869 6452.775 - 6503.188: 32.3220% ( 525) 00:09:07.869 6503.188 - 6553.600: 35.9585% ( 519) 00:09:07.869 6553.600 - 6604.012: 39.5249% ( 509) 00:09:07.869 6604.012 - 6654.425: 43.0143% ( 498) 00:09:07.869 6654.425 - 6704.837: 46.5387% ( 503) 00:09:07.869 6704.837 - 6755.249: 50.0981% ( 508) 00:09:07.869 6755.249 - 6805.662: 53.6365% ( 505) 00:09:07.869 6805.662 - 6856.074: 57.1819% ( 506) 00:09:07.869 6856.074 - 6906.486: 60.5661% ( 483) 00:09:07.869 6906.486 - 6956.898: 63.2077% ( 377) 00:09:07.869 6956.898 - 7007.311: 64.7001% ( 213) 00:09:07.869 7007.311 - 7057.723: 65.3868% ( 98) 00:09:07.869 7057.723 - 7108.135: 65.7161% ( 47) 00:09:07.869 7108.135 - 7158.548: 66.0034% ( 41) 00:09:07.869 7158.548 - 7208.960: 66.2346% ( 33) 00:09:07.869 7208.960 - 7259.372: 66.3677% ( 19) 00:09:07.869 7259.372 - 7309.785: 66.4518% ( 12) 00:09:07.869 7309.785 - 7360.197: 66.5219% ( 10) 00:09:07.869 7360.197 - 7410.609: 66.5989% ( 11) 00:09:07.869 7410.609 - 7461.022: 66.6690% ( 10) 00:09:07.869 7461.022 - 7511.434: 66.7391% ( 10) 00:09:07.869 7511.434 - 7561.846: 66.8442% ( 15) 00:09:07.869 7561.846 - 7612.258: 66.9283% ( 12) 00:09:07.869 7612.258 - 7662.671: 67.0123% ( 12) 00:09:07.869 7662.671 - 7713.083: 67.0894% ( 11) 00:09:07.869 7713.083 - 7763.495: 67.1665% ( 11) 00:09:07.869 7763.495 - 7813.908: 67.2436% ( 11) 00:09:07.869 7813.908 - 7864.320: 67.3487% ( 15) 00:09:07.869 7864.320 - 7914.732: 67.4327% ( 12) 00:09:07.869 7914.732 - 7965.145: 67.5098% ( 11) 00:09:07.869 7965.145 - 8015.557: 67.6009% ( 13) 00:09:07.869 
8015.557 - 8065.969: 67.7200% ( 17) 00:09:07.869 8065.969 - 8116.382: 67.8391% ( 17) 00:09:07.869 8116.382 - 8166.794: 67.9442% ( 15) 00:09:07.869 8166.794 - 8217.206: 68.0423% ( 14) 00:09:07.869 8217.206 - 8267.618: 68.1404% ( 14) 00:09:07.869 8267.618 - 8318.031: 68.2315% ( 13) 00:09:07.869 8318.031 - 8368.443: 68.3576% ( 18) 00:09:07.869 8368.443 - 8418.855: 68.4697% ( 16) 00:09:07.869 8418.855 - 8469.268: 68.5608% ( 13) 00:09:07.869 8469.268 - 8519.680: 68.6659% ( 15) 00:09:07.869 8519.680 - 8570.092: 68.7570% ( 13) 00:09:07.869 8570.092 - 8620.505: 68.8621% ( 15) 00:09:07.869 8620.505 - 8670.917: 68.9602% ( 14) 00:09:07.869 8670.917 - 8721.329: 69.0723% ( 16) 00:09:07.869 8721.329 - 8771.742: 69.1844% ( 16) 00:09:07.869 8771.742 - 8822.154: 69.2685% ( 12) 00:09:07.869 8822.154 - 8872.566: 69.3736% ( 15) 00:09:07.869 8872.566 - 8922.978: 69.4577% ( 12) 00:09:07.869 8922.978 - 8973.391: 69.5067% ( 7) 00:09:07.869 8973.391 - 9023.803: 69.5978% ( 13) 00:09:07.869 9023.803 - 9074.215: 69.6679% ( 10) 00:09:07.869 9074.215 - 9124.628: 69.7239% ( 8) 00:09:07.869 9124.628 - 9175.040: 69.8010% ( 11) 00:09:07.869 9175.040 - 9225.452: 69.8711% ( 10) 00:09:07.869 9225.452 - 9275.865: 69.9552% ( 12) 00:09:07.869 9275.865 - 9326.277: 70.0112% ( 8) 00:09:07.869 9326.277 - 9376.689: 70.0673% ( 8) 00:09:07.869 9376.689 - 9427.102: 70.1233% ( 8) 00:09:07.869 9427.102 - 9477.514: 70.1934% ( 10) 00:09:07.869 9477.514 - 9527.926: 70.2635% ( 10) 00:09:07.869 9527.926 - 9578.338: 70.3335% ( 10) 00:09:07.869 9578.338 - 9628.751: 70.4106% ( 11) 00:09:07.869 9628.751 - 9679.163: 70.4526% ( 6) 00:09:07.869 9679.163 - 9729.575: 70.5017% ( 7) 00:09:07.869 9729.575 - 9779.988: 70.5507% ( 7) 00:09:07.869 9779.988 - 9830.400: 70.5998% ( 7) 00:09:07.869 9830.400 - 9880.812: 70.6418% ( 6) 00:09:07.869 9880.812 - 9931.225: 70.6698% ( 4) 00:09:07.869 9931.225 - 9981.637: 70.7189% ( 7) 00:09:07.869 9981.637 - 10032.049: 70.7749% ( 8) 00:09:07.869 10032.049 - 10082.462: 70.8240% ( 7) 00:09:07.869 10082.462 - 10132.874: 70.8730% ( 7) 00:09:07.869 10132.874 - 10183.286: 70.9221% ( 7) 00:09:07.869 10183.286 - 10233.698: 70.9711% ( 7) 00:09:07.869 10233.698 - 10284.111: 71.0342% ( 9) 00:09:07.869 10284.111 - 10334.523: 71.0902% ( 8) 00:09:07.869 10334.523 - 10384.935: 71.1463% ( 8) 00:09:07.869 10384.935 - 10435.348: 71.1883% ( 6) 00:09:07.869 10435.348 - 10485.760: 71.2304% ( 6) 00:09:07.869 10485.760 - 10536.172: 71.2864% ( 8) 00:09:07.869 10536.172 - 10586.585: 71.3285% ( 6) 00:09:07.869 10586.585 - 10636.997: 71.3775% ( 7) 00:09:07.869 10636.997 - 10687.409: 71.4476% ( 10) 00:09:07.869 10687.409 - 10737.822: 71.5177% ( 10) 00:09:07.869 10737.822 - 10788.234: 71.5877% ( 10) 00:09:07.869 10788.234 - 10838.646: 71.6508% ( 9) 00:09:07.869 10838.646 - 10889.058: 71.6998% ( 7) 00:09:07.869 10889.058 - 10939.471: 71.7629% ( 9) 00:09:07.869 10939.471 - 10989.883: 71.8330% ( 10) 00:09:07.869 10989.883 - 11040.295: 71.9030% ( 10) 00:09:07.869 11040.295 - 11090.708: 71.9731% ( 10) 00:09:07.869 11090.708 - 11141.120: 72.0362% ( 9) 00:09:07.869 11141.120 - 11191.532: 72.1062% ( 10) 00:09:07.869 11191.532 - 11241.945: 72.1553% ( 7) 00:09:07.869 11241.945 - 11292.357: 72.1973% ( 6) 00:09:07.869 11292.357 - 11342.769: 72.2393% ( 6) 00:09:07.869 11342.769 - 11393.182: 72.2814% ( 6) 00:09:07.869 11393.182 - 11443.594: 72.3445% ( 9) 00:09:07.869 11443.594 - 11494.006: 72.5056% ( 23) 00:09:07.869 11494.006 - 11544.418: 72.6317% ( 18) 00:09:07.869 11544.418 - 11594.831: 72.7438% ( 16) 00:09:07.869 11594.831 - 11645.243: 72.8419% ( 14) 
00:09:07.869 11645.243 - 11695.655: 72.9540% ( 16) 00:09:07.869 11695.655 - 11746.068: 73.0872% ( 19) 00:09:07.869 11746.068 - 11796.480: 73.1853% ( 14) 00:09:07.869 11796.480 - 11846.892: 73.3114% ( 18) 00:09:07.869 11846.892 - 11897.305: 73.4655% ( 22) 00:09:07.869 11897.305 - 11947.717: 73.6057% ( 20) 00:09:07.869 11947.717 - 11998.129: 73.7528% ( 21) 00:09:07.869 11998.129 - 12048.542: 73.9070% ( 22) 00:09:07.869 12048.542 - 12098.954: 74.0191% ( 16) 00:09:07.869 12098.954 - 12149.366: 74.1172% ( 14) 00:09:07.869 12149.366 - 12199.778: 74.2152% ( 14) 00:09:07.869 12199.778 - 12250.191: 74.3274% ( 16) 00:09:07.869 12250.191 - 12300.603: 74.4325% ( 15) 00:09:07.869 12300.603 - 12351.015: 74.5446% ( 16) 00:09:07.869 12351.015 - 12401.428: 74.6847% ( 20) 00:09:07.870 12401.428 - 12451.840: 74.8248% ( 20) 00:09:07.870 12451.840 - 12502.252: 75.0210% ( 28) 00:09:07.870 12502.252 - 12552.665: 75.3433% ( 46) 00:09:07.870 12552.665 - 12603.077: 75.5675% ( 32) 00:09:07.870 12603.077 - 12653.489: 75.8408% ( 39) 00:09:07.870 12653.489 - 12703.902: 76.0790% ( 34) 00:09:07.870 12703.902 - 12754.314: 76.3943% ( 45) 00:09:07.870 12754.314 - 12804.726: 76.7096% ( 45) 00:09:07.870 12804.726 - 12855.138: 76.9969% ( 41) 00:09:07.870 12855.138 - 12905.551: 77.2422% ( 35) 00:09:07.870 12905.551 - 13006.375: 77.7957% ( 79) 00:09:07.870 13006.375 - 13107.200: 78.6015% ( 115) 00:09:07.870 13107.200 - 13208.025: 79.3512% ( 107) 00:09:07.870 13208.025 - 13308.849: 80.2130% ( 123) 00:09:07.870 13308.849 - 13409.674: 81.2150% ( 143) 00:09:07.870 13409.674 - 13510.498: 82.1399% ( 132) 00:09:07.870 13510.498 - 13611.323: 82.9877% ( 121) 00:09:07.870 13611.323 - 13712.148: 83.7794% ( 113) 00:09:07.870 13712.148 - 13812.972: 84.5362% ( 108) 00:09:07.870 13812.972 - 13913.797: 85.3209% ( 112) 00:09:07.870 13913.797 - 14014.622: 85.9655% ( 92) 00:09:07.870 14014.622 - 14115.446: 86.5821% ( 88) 00:09:07.870 14115.446 - 14216.271: 87.1777% ( 85) 00:09:07.870 14216.271 - 14317.095: 87.7733% ( 85) 00:09:07.870 14317.095 - 14417.920: 88.1867% ( 59) 00:09:07.870 14417.920 - 14518.745: 88.6141% ( 61) 00:09:07.870 14518.745 - 14619.569: 88.9714% ( 51) 00:09:07.870 14619.569 - 14720.394: 89.4479% ( 68) 00:09:07.870 14720.394 - 14821.218: 89.8683% ( 60) 00:09:07.870 14821.218 - 14922.043: 90.2326% ( 52) 00:09:07.870 14922.043 - 15022.868: 90.6110% ( 54) 00:09:07.870 15022.868 - 15123.692: 91.0384% ( 61) 00:09:07.870 15123.692 - 15224.517: 91.4238% ( 55) 00:09:07.870 15224.517 - 15325.342: 91.8302% ( 58) 00:09:07.870 15325.342 - 15426.166: 92.1525% ( 46) 00:09:07.870 15426.166 - 15526.991: 92.5518% ( 57) 00:09:07.870 15526.991 - 15627.815: 93.0003% ( 64) 00:09:07.870 15627.815 - 15728.640: 93.3506% ( 50) 00:09:07.870 15728.640 - 15829.465: 93.7080% ( 51) 00:09:07.870 15829.465 - 15930.289: 94.0233% ( 45) 00:09:07.870 15930.289 - 16031.114: 94.3386% ( 45) 00:09:07.870 16031.114 - 16131.938: 94.6118% ( 39) 00:09:07.870 16131.938 - 16232.763: 94.8571% ( 35) 00:09:07.870 16232.763 - 16333.588: 95.0813% ( 32) 00:09:07.870 16333.588 - 16434.412: 95.2845% ( 29) 00:09:07.870 16434.412 - 16535.237: 95.4877% ( 29) 00:09:07.870 16535.237 - 16636.062: 95.6208% ( 19) 00:09:07.870 16636.062 - 16736.886: 95.8100% ( 27) 00:09:07.870 16736.886 - 16837.711: 95.9781% ( 24) 00:09:07.870 16837.711 - 16938.535: 96.1463% ( 24) 00:09:07.870 16938.535 - 17039.360: 96.4126% ( 38) 00:09:07.870 17039.360 - 17140.185: 96.5667% ( 22) 00:09:07.870 17140.185 - 17241.009: 96.7699% ( 29) 00:09:07.870 17241.009 - 17341.834: 96.9240% ( 22) 00:09:07.870 17341.834 - 
17442.658: 97.0712% ( 21) 00:09:07.870 17442.658 - 17543.483: 97.2393% ( 24) 00:09:07.870 17543.483 - 17644.308: 97.3725% ( 19) 00:09:07.870 17644.308 - 17745.132: 97.4986% ( 18) 00:09:07.870 17745.132 - 17845.957: 97.6107% ( 16) 00:09:07.870 17845.957 - 17946.782: 97.6878% ( 11) 00:09:07.870 17946.782 - 18047.606: 97.7578% ( 10) 00:09:07.870 18450.905 - 18551.729: 97.7649% ( 1) 00:09:07.870 18551.729 - 18652.554: 97.7999% ( 5) 00:09:07.870 18652.554 - 18753.378: 97.8489% ( 7) 00:09:07.870 18753.378 - 18854.203: 97.9120% ( 9) 00:09:07.870 18854.203 - 18955.028: 97.9610% ( 7) 00:09:07.870 18955.028 - 19055.852: 98.0171% ( 8) 00:09:07.870 19055.852 - 19156.677: 98.0802% ( 9) 00:09:07.870 19156.677 - 19257.502: 98.1432% ( 9) 00:09:07.870 19257.502 - 19358.326: 98.2063% ( 9) 00:09:07.870 19358.326 - 19459.151: 98.2763% ( 10) 00:09:07.870 19459.151 - 19559.975: 98.3324% ( 8) 00:09:07.870 19559.975 - 19660.800: 98.3744% ( 6) 00:09:07.870 19660.800 - 19761.625: 98.4165% ( 6) 00:09:07.870 19761.625 - 19862.449: 98.4795% ( 9) 00:09:07.870 19862.449 - 19963.274: 98.5356% ( 8) 00:09:07.870 19963.274 - 20064.098: 98.6057% ( 10) 00:09:07.870 20064.098 - 20164.923: 98.6547% ( 7) 00:09:07.870 20164.923 - 20265.748: 98.7248% ( 10) 00:09:07.870 20265.748 - 20366.572: 98.7878% ( 9) 00:09:07.870 20366.572 - 20467.397: 98.8369% ( 7) 00:09:07.870 20467.397 - 20568.222: 98.9140% ( 11) 00:09:07.870 20568.222 - 20669.046: 98.9840% ( 10) 00:09:07.870 20669.046 - 20769.871: 99.0471% ( 9) 00:09:07.870 20769.871 - 20870.695: 99.1312% ( 12) 00:09:07.870 20870.695 - 20971.520: 99.2152% ( 12) 00:09:07.870 20971.520 - 21072.345: 99.2853% ( 10) 00:09:07.870 21072.345 - 21173.169: 99.3484% ( 9) 00:09:07.870 21173.169 - 21273.994: 99.4184% ( 10) 00:09:07.870 21273.994 - 21374.818: 99.4885% ( 10) 00:09:07.870 21374.818 - 21475.643: 99.5446% ( 8) 00:09:07.870 21475.643 - 21576.468: 99.5516% ( 1) 00:09:07.870 26617.698 - 26819.348: 99.5936% ( 6) 00:09:07.870 26819.348 - 27020.997: 99.6707% ( 11) 00:09:07.870 27020.997 - 27222.646: 99.7478% ( 11) 00:09:07.870 27222.646 - 27424.295: 99.8318% ( 12) 00:09:07.870 27424.295 - 27625.945: 99.9089% ( 11) 00:09:07.870 27625.945 - 27827.594: 99.9930% ( 12) 00:09:07.870 27827.594 - 28029.243: 100.0000% ( 1) 00:09:07.870 00:09:07.870 04:55:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:09.244 Initializing NVMe Controllers 00:09:09.244 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:09.244 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:09.244 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:09.244 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:09.244 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:09.244 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:09.244 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:09.244 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:09.244 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:09.244 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:09.244 Initialization complete. Launching workers. 
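[Editor's note, not captured output] The spdk_nvme_perf invocation echoed just above is worth unpacking before the results table. The annotated restatement below is a sketch based on the tool's usage text; per that text, -L enables software latency tracking (once for the percentile summary, twice for the detailed histograms seen earlier in this log), though that reading of the doubled flag is inferred rather than quoted verbatim.

# Annotated restatement (sketch) of the command echoed above:
#   -q 128     io depth: keep 128 I/Os outstanding
#   -w write   io pattern: sequential writes
#   -o 12288   io size in bytes (12 KiB per request)
#   -t 1       run time in seconds
#   -LL        sw latency tracking; -L once = summary, twice = detailed histograms
#   -i 0       shared memory group ID
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0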
00:09:09.244 ======================================================== 00:09:09.244 Latency(us) 00:09:09.244 Device Information : IOPS MiB/s Average min max 00:09:09.244 PCIE (0000:00:10.0) NSID 1 from core 0: 17566.73 205.86 7288.43 5033.88 26780.29 00:09:09.244 PCIE (0000:00:11.0) NSID 1 from core 0: 17566.73 205.86 7280.61 4966.54 26158.23 00:09:09.244 PCIE (0000:00:13.0) NSID 1 from core 0: 17566.73 205.86 7273.89 4630.18 25925.71 00:09:09.244 PCIE (0000:00:12.0) NSID 1 from core 0: 17566.73 205.86 7266.96 4331.27 25639.82 00:09:09.244 PCIE (0000:00:12.0) NSID 2 from core 0: 17566.73 205.86 7260.01 4110.08 25010.05 00:09:09.244 PCIE (0000:00:12.0) NSID 3 from core 0: 17566.73 205.86 7253.05 3876.38 24095.66 00:09:09.244 ======================================================== 00:09:09.244 Total : 105400.37 1235.16 7270.49 3876.38 26780.29 00:09:09.244 00:09:09.244 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:09.244 ================================================================================= 00:09:09.244 1.00000% : 6200.714us 00:09:09.244 10.00000% : 6553.600us 00:09:09.244 25.00000% : 6704.837us 00:09:09.244 50.00000% : 6956.898us 00:09:09.244 75.00000% : 7309.785us 00:09:09.244 90.00000% : 8166.794us 00:09:09.244 95.00000% : 9023.803us 00:09:09.244 98.00000% : 11645.243us 00:09:09.244 99.00000% : 12754.314us 00:09:09.244 99.50000% : 19559.975us 00:09:09.244 99.90000% : 26416.049us 00:09:09.244 99.99000% : 26819.348us 00:09:09.244 99.99900% : 26819.348us 00:09:09.244 99.99990% : 26819.348us 00:09:09.244 99.99999% : 26819.348us 00:09:09.244 00:09:09.244 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:09.244 ================================================================================= 00:09:09.245 1.00000% : 6427.569us 00:09:09.245 10.00000% : 6654.425us 00:09:09.245 25.00000% : 6755.249us 00:09:09.245 50.00000% : 6906.486us 00:09:09.245 75.00000% : 7259.372us 00:09:09.245 90.00000% : 8015.557us 00:09:09.245 95.00000% : 9124.628us 00:09:09.245 98.00000% : 11796.480us 00:09:09.245 99.00000% : 13208.025us 00:09:09.245 99.50000% : 19055.852us 00:09:09.245 99.90000% : 25811.102us 00:09:09.245 99.99000% : 26214.400us 00:09:09.245 99.99900% : 26214.400us 00:09:09.245 99.99990% : 26214.400us 00:09:09.245 99.99999% : 26214.400us 00:09:09.245 00:09:09.245 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:09.245 ================================================================================= 00:09:09.245 1.00000% : 6377.157us 00:09:09.245 10.00000% : 6654.425us 00:09:09.245 25.00000% : 6755.249us 00:09:09.245 50.00000% : 6906.486us 00:09:09.245 75.00000% : 7208.960us 00:09:09.245 90.00000% : 8065.969us 00:09:09.245 95.00000% : 9124.628us 00:09:09.245 98.00000% : 11796.480us 00:09:09.245 99.00000% : 13308.849us 00:09:09.245 99.50000% : 18955.028us 00:09:09.245 99.90000% : 25508.628us 00:09:09.245 99.99000% : 26012.751us 00:09:09.245 99.99900% : 26012.751us 00:09:09.245 99.99990% : 26012.751us 00:09:09.245 99.99999% : 26012.751us 00:09:09.245 00:09:09.245 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:09.245 ================================================================================= 00:09:09.245 1.00000% : 6351.951us 00:09:09.245 10.00000% : 6654.425us 00:09:09.245 25.00000% : 6755.249us 00:09:09.245 50.00000% : 6906.486us 00:09:09.245 75.00000% : 7208.960us 00:09:09.245 90.00000% : 8065.969us 00:09:09.245 95.00000% : 9124.628us 00:09:09.245 98.00000% : 11191.532us 00:09:09.245 99.00000% : 
13308.849us 00:09:09.245 99.50000% : 18551.729us 00:09:09.245 99.90000% : 25306.978us 00:09:09.245 99.99000% : 25710.277us 00:09:09.245 99.99900% : 25710.277us 00:09:09.245 99.99990% : 25710.277us 00:09:09.245 99.99999% : 25710.277us 00:09:09.245 00:09:09.245 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:09.245 ================================================================================= 00:09:09.245 1.00000% : 6351.951us 00:09:09.245 10.00000% : 6654.425us 00:09:09.245 25.00000% : 6755.249us 00:09:09.245 50.00000% : 6906.486us 00:09:09.245 75.00000% : 7259.372us 00:09:09.245 90.00000% : 8065.969us 00:09:09.245 95.00000% : 9074.215us 00:09:09.245 98.00000% : 11141.120us 00:09:09.245 99.00000% : 13409.674us 00:09:09.245 99.50000% : 18450.905us 00:09:09.245 99.90000% : 24702.031us 00:09:09.245 99.99000% : 25004.505us 00:09:09.245 99.99900% : 25105.329us 00:09:09.245 99.99990% : 25105.329us 00:09:09.245 99.99999% : 25105.329us 00:09:09.245 00:09:09.245 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:09.245 ================================================================================= 00:09:09.245 1.00000% : 6351.951us 00:09:09.245 10.00000% : 6654.425us 00:09:09.245 25.00000% : 6755.249us 00:09:09.245 50.00000% : 6906.486us 00:09:09.245 75.00000% : 7208.960us 00:09:09.245 90.00000% : 8116.382us 00:09:09.245 95.00000% : 8973.391us 00:09:09.245 98.00000% : 11393.182us 00:09:09.245 99.00000% : 13308.849us 00:09:09.245 99.50000% : 18450.905us 00:09:09.245 99.90000% : 23794.609us 00:09:09.245 99.99000% : 24097.083us 00:09:09.245 99.99900% : 24097.083us 00:09:09.245 99.99990% : 24097.083us 00:09:09.245 99.99999% : 24097.083us 00:09:09.245 00:09:09.245 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:09.245 ============================================================================== 00:09:09.245 Range in us Cumulative IO count 00:09:09.245 5016.025 - 5041.231: 0.0284% ( 5) 00:09:09.245 5041.231 - 5066.437: 0.0398% ( 2) 00:09:09.245 5091.643 - 5116.849: 0.0455% ( 1) 00:09:09.245 5116.849 - 5142.055: 0.0511% ( 1) 00:09:09.245 5142.055 - 5167.262: 0.0625% ( 2) 00:09:09.245 5167.262 - 5192.468: 0.0682% ( 1) 00:09:09.245 5192.468 - 5217.674: 0.0852% ( 3) 00:09:09.245 5217.674 - 5242.880: 0.1023% ( 3) 00:09:09.245 5242.880 - 5268.086: 0.1193% ( 3) 00:09:09.245 5268.086 - 5293.292: 0.1420% ( 4) 00:09:09.245 5293.292 - 5318.498: 0.1477% ( 1) 00:09:09.245 5318.498 - 5343.705: 0.1534% ( 1) 00:09:09.245 5343.705 - 5368.911: 0.1591% ( 1) 00:09:09.245 5368.911 - 5394.117: 0.1648% ( 1) 00:09:09.245 5394.117 - 5419.323: 0.1705% ( 1) 00:09:09.245 5419.323 - 5444.529: 0.1818% ( 2) 00:09:09.245 5444.529 - 5469.735: 0.1875% ( 1) 00:09:09.245 5469.735 - 5494.942: 0.1932% ( 1) 00:09:09.245 5494.942 - 5520.148: 0.2045% ( 2) 00:09:09.245 5520.148 - 5545.354: 0.2216% ( 3) 00:09:09.245 5545.354 - 5570.560: 0.3523% ( 23) 00:09:09.245 5570.560 - 5595.766: 0.3636% ( 2) 00:09:09.245 5973.858 - 5999.065: 0.3693% ( 1) 00:09:09.245 5999.065 - 6024.271: 0.3807% ( 2) 00:09:09.245 6024.271 - 6049.477: 0.4148% ( 6) 00:09:09.245 6049.477 - 6074.683: 0.4886% ( 13) 00:09:09.245 6074.683 - 6099.889: 0.5625% ( 13) 00:09:09.245 6099.889 - 6125.095: 0.6761% ( 20) 00:09:09.245 6125.095 - 6150.302: 0.8182% ( 25) 00:09:09.245 6150.302 - 6175.508: 0.9375% ( 21) 00:09:09.245 6175.508 - 6200.714: 1.0795% ( 25) 00:09:09.245 6200.714 - 6225.920: 1.2727% ( 34) 00:09:09.245 6225.920 - 6251.126: 1.4432% ( 30) 00:09:09.245 6251.126 - 6276.332: 1.6420% ( 35) 00:09:09.245 6276.332 
- 6301.538: 1.8636% ( 39) 00:09:09.245 6301.538 - 6326.745: 2.0852% ( 39) 00:09:09.245 6326.745 - 6351.951: 2.3693% ( 50) 00:09:09.245 6351.951 - 6377.157: 2.7841% ( 73) 00:09:09.245 6377.157 - 6402.363: 3.2898% ( 89) 00:09:09.245 6402.363 - 6427.569: 3.8636% ( 101) 00:09:09.245 6427.569 - 6452.775: 4.6932% ( 146) 00:09:09.245 6452.775 - 6503.188: 6.8864% ( 386) 00:09:09.245 6503.188 - 6553.600: 10.0170% ( 551) 00:09:09.245 6553.600 - 6604.012: 14.7727% ( 837) 00:09:09.245 6604.012 - 6654.425: 20.7898% ( 1059) 00:09:09.245 6654.425 - 6704.837: 27.6250% ( 1203) 00:09:09.245 6704.837 - 6755.249: 34.3409% ( 1182) 00:09:09.245 6755.249 - 6805.662: 40.2159% ( 1034) 00:09:09.245 6805.662 - 6856.074: 44.3068% ( 720) 00:09:09.245 6856.074 - 6906.486: 47.9886% ( 648) 00:09:09.245 6906.486 - 6956.898: 51.7159% ( 656) 00:09:09.245 6956.898 - 7007.311: 55.4943% ( 665) 00:09:09.245 7007.311 - 7057.723: 59.2443% ( 660) 00:09:09.245 7057.723 - 7108.135: 62.8295% ( 631) 00:09:09.245 7108.135 - 7158.548: 66.5114% ( 648) 00:09:09.245 7158.548 - 7208.960: 69.6648% ( 555) 00:09:09.245 7208.960 - 7259.372: 72.9261% ( 574) 00:09:09.245 7259.372 - 7309.785: 75.6364% ( 477) 00:09:09.245 7309.785 - 7360.197: 77.7557% ( 373) 00:09:09.245 7360.197 - 7410.609: 79.2955% ( 271) 00:09:09.245 7410.609 - 7461.022: 80.6364% ( 236) 00:09:09.245 7461.022 - 7511.434: 81.6705% ( 182) 00:09:09.245 7511.434 - 7561.846: 82.5114% ( 148) 00:09:09.245 7561.846 - 7612.258: 83.3466% ( 147) 00:09:09.245 7612.258 - 7662.671: 84.1420% ( 140) 00:09:09.245 7662.671 - 7713.083: 85.1534% ( 178) 00:09:09.245 7713.083 - 7763.495: 85.9943% ( 148) 00:09:09.245 7763.495 - 7813.908: 86.5852% ( 104) 00:09:09.245 7813.908 - 7864.320: 87.0795% ( 87) 00:09:09.245 7864.320 - 7914.732: 87.6989% ( 109) 00:09:09.245 7914.732 - 7965.145: 88.3580% ( 116) 00:09:09.245 7965.145 - 8015.557: 88.7557% ( 70) 00:09:09.245 8015.557 - 8065.969: 89.2557% ( 88) 00:09:09.245 8065.969 - 8116.382: 89.8580% ( 106) 00:09:09.245 8116.382 - 8166.794: 90.3807% ( 92) 00:09:09.245 8166.794 - 8217.206: 90.9830% ( 106) 00:09:09.245 8217.206 - 8267.618: 91.6136% ( 111) 00:09:09.245 8267.618 - 8318.031: 92.0455% ( 76) 00:09:09.245 8318.031 - 8368.443: 92.4830% ( 77) 00:09:09.245 8368.443 - 8418.855: 92.8409% ( 63) 00:09:09.245 8418.855 - 8469.268: 93.1477% ( 54) 00:09:09.245 8469.268 - 8519.680: 93.4091% ( 46) 00:09:09.246 8519.680 - 8570.092: 93.6420% ( 41) 00:09:09.246 8570.092 - 8620.505: 93.8523% ( 37) 00:09:09.246 8620.505 - 8670.917: 94.0227% ( 30) 00:09:09.246 8670.917 - 8721.329: 94.2443% ( 39) 00:09:09.246 8721.329 - 8771.742: 94.3182% ( 13) 00:09:09.246 8771.742 - 8822.154: 94.4489% ( 23) 00:09:09.246 8822.154 - 8872.566: 94.5455% ( 17) 00:09:09.246 8872.566 - 8922.978: 94.7784% ( 41) 00:09:09.246 8922.978 - 8973.391: 94.9886% ( 37) 00:09:09.246 8973.391 - 9023.803: 95.1648% ( 31) 00:09:09.246 9023.803 - 9074.215: 95.3466% ( 32) 00:09:09.246 9074.215 - 9124.628: 95.5284% ( 32) 00:09:09.246 9124.628 - 9175.040: 95.7330% ( 36) 00:09:09.246 9175.040 - 9225.452: 95.9091% ( 31) 00:09:09.246 9225.452 - 9275.865: 96.0909% ( 32) 00:09:09.246 9275.865 - 9326.277: 96.1761% ( 15) 00:09:09.246 9326.277 - 9376.689: 96.2784% ( 18) 00:09:09.246 9376.689 - 9427.102: 96.3523% ( 13) 00:09:09.246 9427.102 - 9477.514: 96.4318% ( 14) 00:09:09.246 9477.514 - 9527.926: 96.5000% ( 12) 00:09:09.246 9527.926 - 9578.338: 96.5568% ( 10) 00:09:09.246 9578.338 - 9628.751: 96.6307% ( 13) 00:09:09.246 9628.751 - 9679.163: 96.6818% ( 9) 00:09:09.246 9679.163 - 9729.575: 96.7500% ( 12) 00:09:09.246 
9729.575 - 9779.988: 96.8068% ( 10) 00:09:09.246 9779.988 - 9830.400: 96.8466% ( 7) 00:09:09.246 9830.400 - 9880.812: 96.8636% ( 3) 00:09:09.246 9880.812 - 9931.225: 96.9034% ( 7) 00:09:09.246 9931.225 - 9981.637: 96.9716% ( 12) 00:09:09.246 9981.637 - 10032.049: 97.0682% ( 17) 00:09:09.246 10032.049 - 10082.462: 97.0739% ( 1) 00:09:09.246 10082.462 - 10132.874: 97.0852% ( 2) 00:09:09.246 10132.874 - 10183.286: 97.0909% ( 1) 00:09:09.246 10233.698 - 10284.111: 97.1023% ( 2) 00:09:09.246 10284.111 - 10334.523: 97.1136% ( 2) 00:09:09.246 10334.523 - 10384.935: 97.1193% ( 1) 00:09:09.246 10435.348 - 10485.760: 97.1364% ( 3) 00:09:09.246 10485.760 - 10536.172: 97.1761% ( 7) 00:09:09.246 10536.172 - 10586.585: 97.2102% ( 6) 00:09:09.246 10586.585 - 10636.997: 97.2386% ( 5) 00:09:09.246 10636.997 - 10687.409: 97.2727% ( 6) 00:09:09.246 10687.409 - 10737.822: 97.3125% ( 7) 00:09:09.246 10737.822 - 10788.234: 97.3352% ( 4) 00:09:09.246 10788.234 - 10838.646: 97.3523% ( 3) 00:09:09.246 10838.646 - 10889.058: 97.3750% ( 4) 00:09:09.246 10889.058 - 10939.471: 97.3920% ( 3) 00:09:09.246 10939.471 - 10989.883: 97.4261% ( 6) 00:09:09.246 10989.883 - 11040.295: 97.4432% ( 3) 00:09:09.246 11040.295 - 11090.708: 97.4602% ( 3) 00:09:09.246 11090.708 - 11141.120: 97.4830% ( 4) 00:09:09.246 11141.120 - 11191.532: 97.5284% ( 8) 00:09:09.246 11191.532 - 11241.945: 97.6250% ( 17) 00:09:09.246 11241.945 - 11292.357: 97.6761% ( 9) 00:09:09.246 11292.357 - 11342.769: 97.7102% ( 6) 00:09:09.246 11342.769 - 11393.182: 97.7557% ( 8) 00:09:09.246 11393.182 - 11443.594: 97.7955% ( 7) 00:09:09.246 11443.594 - 11494.006: 97.8239% ( 5) 00:09:09.246 11494.006 - 11544.418: 97.8523% ( 5) 00:09:09.246 11544.418 - 11594.831: 97.8864% ( 6) 00:09:09.246 11594.831 - 11645.243: 98.0341% ( 26) 00:09:09.246 11645.243 - 11695.655: 98.1364% ( 18) 00:09:09.246 11695.655 - 11746.068: 98.2216% ( 15) 00:09:09.246 11746.068 - 11796.480: 98.2670% ( 8) 00:09:09.246 11796.480 - 11846.892: 98.3125% ( 8) 00:09:09.246 11846.892 - 11897.305: 98.3352% ( 4) 00:09:09.246 11897.305 - 11947.717: 98.3750% ( 7) 00:09:09.246 11947.717 - 11998.129: 98.3977% ( 4) 00:09:09.246 11998.129 - 12048.542: 98.4318% ( 6) 00:09:09.246 12048.542 - 12098.954: 98.4545% ( 4) 00:09:09.246 12098.954 - 12149.366: 98.4943% ( 7) 00:09:09.246 12149.366 - 12199.778: 98.5682% ( 13) 00:09:09.246 12199.778 - 12250.191: 98.6420% ( 13) 00:09:09.246 12250.191 - 12300.603: 98.7159% ( 13) 00:09:09.246 12300.603 - 12351.015: 98.7386% ( 4) 00:09:09.246 12351.015 - 12401.428: 98.7841% ( 8) 00:09:09.246 12401.428 - 12451.840: 98.8125% ( 5) 00:09:09.246 12451.840 - 12502.252: 98.8466% ( 6) 00:09:09.246 12502.252 - 12552.665: 98.8750% ( 5) 00:09:09.246 12552.665 - 12603.077: 98.9148% ( 7) 00:09:09.246 12603.077 - 12653.489: 98.9432% ( 5) 00:09:09.246 12653.489 - 12703.902: 98.9886% ( 8) 00:09:09.246 12703.902 - 12754.314: 99.0170% ( 5) 00:09:09.246 12754.314 - 12804.726: 99.0284% ( 2) 00:09:09.246 12804.726 - 12855.138: 99.0511% ( 4) 00:09:09.246 12855.138 - 12905.551: 99.0739% ( 4) 00:09:09.246 12905.551 - 13006.375: 99.1307% ( 10) 00:09:09.246 13006.375 - 13107.200: 99.1818% ( 9) 00:09:09.246 13107.200 - 13208.025: 99.2159% ( 6) 00:09:09.246 13208.025 - 13308.849: 99.2273% ( 2) 00:09:09.246 13308.849 - 13409.674: 99.2443% ( 3) 00:09:09.246 13409.674 - 13510.498: 99.2500% ( 1) 00:09:09.246 13510.498 - 13611.323: 99.2670% ( 3) 00:09:09.246 13611.323 - 13712.148: 99.2727% ( 1) 00:09:09.246 18753.378 - 18854.203: 99.2784% ( 1) 00:09:09.246 18854.203 - 18955.028: 99.3864% ( 19) 00:09:09.246 
18955.028 - 19055.852: 99.4034% ( 3) 00:09:09.246 19055.852 - 19156.677: 99.4148% ( 2) 00:09:09.246 19156.677 - 19257.502: 99.4375% ( 4) 00:09:09.246 19257.502 - 19358.326: 99.4545% ( 3) 00:09:09.246 19358.326 - 19459.151: 99.4830% ( 5) 00:09:09.246 19459.151 - 19559.975: 99.5057% ( 4) 00:09:09.246 19559.975 - 19660.800: 99.5341% ( 5) 00:09:09.246 19660.800 - 19761.625: 99.5568% ( 4) 00:09:09.246 19761.625 - 19862.449: 99.5852% ( 5) 00:09:09.246 19862.449 - 19963.274: 99.6136% ( 5) 00:09:09.246 19963.274 - 20064.098: 99.6193% ( 1) 00:09:09.246 20164.923 - 20265.748: 99.6250% ( 1) 00:09:09.246 20265.748 - 20366.572: 99.6364% ( 2) 00:09:09.246 25004.505 - 25105.329: 99.6477% ( 2) 00:09:09.246 25105.329 - 25206.154: 99.6705% ( 4) 00:09:09.246 25206.154 - 25306.978: 99.6875% ( 3) 00:09:09.246 25306.978 - 25407.803: 99.7045% ( 3) 00:09:09.246 25407.803 - 25508.628: 99.7273% ( 4) 00:09:09.246 25508.628 - 25609.452: 99.7443% ( 3) 00:09:09.246 25609.452 - 25710.277: 99.7727% ( 5) 00:09:09.246 25710.277 - 25811.102: 99.7955% ( 4) 00:09:09.246 25811.102 - 26012.751: 99.8352% ( 7) 00:09:09.246 26012.751 - 26214.400: 99.8807% ( 8) 00:09:09.246 26214.400 - 26416.049: 99.9261% ( 8) 00:09:09.246 26416.049 - 26617.698: 99.9716% ( 8) 00:09:09.246 26617.698 - 26819.348: 100.0000% ( 5) 00:09:09.246 00:09:09.246 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:09.246 ============================================================================== 00:09:09.246 Range in us Cumulative IO count 00:09:09.246 4965.612 - 4990.818: 0.0227% ( 4) 00:09:09.246 4990.818 - 5016.025: 0.0341% ( 2) 00:09:09.246 5016.025 - 5041.231: 0.0739% ( 7) 00:09:09.246 5041.231 - 5066.437: 0.1136% ( 7) 00:09:09.246 5066.437 - 5091.643: 0.1477% ( 6) 00:09:09.246 5091.643 - 5116.849: 0.1875% ( 7) 00:09:09.246 5116.849 - 5142.055: 0.2102% ( 4) 00:09:09.246 5142.055 - 5167.262: 0.2330% ( 4) 00:09:09.246 5167.262 - 5192.468: 0.3068% ( 13) 00:09:09.246 5192.468 - 5217.674: 0.3409% ( 6) 00:09:09.247 5217.674 - 5242.880: 0.3466% ( 1) 00:09:09.247 5242.880 - 5268.086: 0.3636% ( 3) 00:09:09.247 6024.271 - 6049.477: 0.3693% ( 1) 00:09:09.247 6099.889 - 6125.095: 0.3807% ( 2) 00:09:09.247 6175.508 - 6200.714: 0.3920% ( 2) 00:09:09.247 6200.714 - 6225.920: 0.4034% ( 2) 00:09:09.247 6225.920 - 6251.126: 0.4432% ( 7) 00:09:09.247 6251.126 - 6276.332: 0.4545% ( 2) 00:09:09.247 6276.332 - 6301.538: 0.5000% ( 8) 00:09:09.247 6301.538 - 6326.745: 0.5568% ( 10) 00:09:09.247 6326.745 - 6351.951: 0.6193% ( 11) 00:09:09.247 6351.951 - 6377.157: 0.7386% ( 21) 00:09:09.247 6377.157 - 6402.363: 0.9489% ( 37) 00:09:09.247 6402.363 - 6427.569: 1.2216% ( 48) 00:09:09.247 6427.569 - 6452.775: 1.6136% ( 69) 00:09:09.247 6452.775 - 6503.188: 3.1591% ( 272) 00:09:09.247 6503.188 - 6553.600: 4.5398% ( 243) 00:09:09.247 6553.600 - 6604.012: 8.0057% ( 610) 00:09:09.247 6604.012 - 6654.425: 12.5625% ( 802) 00:09:09.247 6654.425 - 6704.837: 17.7102% ( 906) 00:09:09.247 6704.837 - 6755.249: 25.9091% ( 1443) 00:09:09.247 6755.249 - 6805.662: 33.2500% ( 1292) 00:09:09.247 6805.662 - 6856.074: 42.9318% ( 1704) 00:09:09.247 6856.074 - 6906.486: 51.8864% ( 1576) 00:09:09.247 6906.486 - 6956.898: 58.8807% ( 1231) 00:09:09.247 6956.898 - 7007.311: 63.1648% ( 754) 00:09:09.247 7007.311 - 7057.723: 67.0170% ( 678) 00:09:09.247 7057.723 - 7108.135: 70.3807% ( 592) 00:09:09.247 7108.135 - 7158.548: 72.9773% ( 457) 00:09:09.247 7158.548 - 7208.960: 74.5114% ( 270) 00:09:09.247 7208.960 - 7259.372: 76.4602% ( 343) 00:09:09.247 7259.372 - 7309.785: 78.2102% ( 308) 
00:09:09.247 7309.785 - 7360.197: 79.7670% ( 274) 00:09:09.247 7360.197 - 7410.609: 80.8466% ( 190) 00:09:09.247 7410.609 - 7461.022: 81.6534% ( 142) 00:09:09.247 7461.022 - 7511.434: 82.1761% ( 92) 00:09:09.247 7511.434 - 7561.846: 82.7898% ( 108) 00:09:09.247 7561.846 - 7612.258: 83.4602% ( 118) 00:09:09.247 7612.258 - 7662.671: 84.5739% ( 196) 00:09:09.247 7662.671 - 7713.083: 85.5966% ( 180) 00:09:09.247 7713.083 - 7763.495: 86.4034% ( 142) 00:09:09.247 7763.495 - 7813.908: 87.1705% ( 135) 00:09:09.247 7813.908 - 7864.320: 88.0852% ( 161) 00:09:09.247 7864.320 - 7914.732: 88.6023% ( 91) 00:09:09.247 7914.732 - 7965.145: 89.4659% ( 152) 00:09:09.247 7965.145 - 8015.557: 90.0398% ( 101) 00:09:09.247 8015.557 - 8065.969: 90.4773% ( 77) 00:09:09.247 8065.969 - 8116.382: 91.0057% ( 93) 00:09:09.247 8116.382 - 8166.794: 91.3807% ( 66) 00:09:09.247 8166.794 - 8217.206: 91.8807% ( 88) 00:09:09.247 8217.206 - 8267.618: 92.1818% ( 53) 00:09:09.247 8267.618 - 8318.031: 92.4943% ( 55) 00:09:09.247 8318.031 - 8368.443: 92.8977% ( 71) 00:09:09.247 8368.443 - 8418.855: 93.1761% ( 49) 00:09:09.247 8418.855 - 8469.268: 93.3864% ( 37) 00:09:09.247 8469.268 - 8519.680: 93.5284% ( 25) 00:09:09.247 8519.680 - 8570.092: 93.6534% ( 22) 00:09:09.247 8570.092 - 8620.505: 93.7955% ( 25) 00:09:09.247 8620.505 - 8670.917: 94.1136% ( 56) 00:09:09.247 8670.917 - 8721.329: 94.2330% ( 21) 00:09:09.247 8721.329 - 8771.742: 94.2898% ( 10) 00:09:09.247 8771.742 - 8822.154: 94.3352% ( 8) 00:09:09.247 8822.154 - 8872.566: 94.4489% ( 20) 00:09:09.247 8872.566 - 8922.978: 94.6591% ( 37) 00:09:09.247 8922.978 - 8973.391: 94.8239% ( 29) 00:09:09.247 8973.391 - 9023.803: 94.8920% ( 12) 00:09:09.247 9023.803 - 9074.215: 94.9659% ( 13) 00:09:09.247 9074.215 - 9124.628: 95.0739% ( 19) 00:09:09.247 9124.628 - 9175.040: 95.3864% ( 55) 00:09:09.247 9175.040 - 9225.452: 95.6080% ( 39) 00:09:09.247 9225.452 - 9275.865: 95.7955% ( 33) 00:09:09.247 9275.865 - 9326.277: 96.0057% ( 37) 00:09:09.247 9326.277 - 9376.689: 96.1705% ( 29) 00:09:09.247 9376.689 - 9427.102: 96.2500% ( 14) 00:09:09.247 9427.102 - 9477.514: 96.3636% ( 20) 00:09:09.247 9477.514 - 9527.926: 96.5398% ( 31) 00:09:09.247 9527.926 - 9578.338: 96.5966% ( 10) 00:09:09.247 9578.338 - 9628.751: 96.6534% ( 10) 00:09:09.247 9628.751 - 9679.163: 96.7045% ( 9) 00:09:09.247 9679.163 - 9729.575: 96.7670% ( 11) 00:09:09.247 9729.575 - 9779.988: 96.8295% ( 11) 00:09:09.247 9779.988 - 9830.400: 96.8807% ( 9) 00:09:09.247 9830.400 - 9880.812: 96.9261% ( 8) 00:09:09.247 9880.812 - 9931.225: 96.9716% ( 8) 00:09:09.247 9931.225 - 9981.637: 97.0057% ( 6) 00:09:09.247 9981.637 - 10032.049: 97.0227% ( 3) 00:09:09.247 10032.049 - 10082.462: 97.0398% ( 3) 00:09:09.247 10082.462 - 10132.874: 97.0568% ( 3) 00:09:09.247 10132.874 - 10183.286: 97.0682% ( 2) 00:09:09.247 10183.286 - 10233.698: 97.0795% ( 2) 00:09:09.247 10233.698 - 10284.111: 97.1648% ( 15) 00:09:09.247 10284.111 - 10334.523: 97.2557% ( 16) 00:09:09.247 10334.523 - 10384.935: 97.3239% ( 12) 00:09:09.247 10384.935 - 10435.348: 97.4148% ( 16) 00:09:09.247 10435.348 - 10485.760: 97.5284% ( 20) 00:09:09.247 10485.760 - 10536.172: 97.5966% ( 12) 00:09:09.247 10536.172 - 10586.585: 97.6534% ( 10) 00:09:09.247 10586.585 - 10636.997: 97.6818% ( 5) 00:09:09.247 10636.997 - 10687.409: 97.7045% ( 4) 00:09:09.247 10687.409 - 10737.822: 97.7273% ( 4) 00:09:09.247 10737.822 - 10788.234: 97.7443% ( 3) 00:09:09.247 10788.234 - 10838.646: 97.7670% ( 4) 00:09:09.247 10838.646 - 10889.058: 97.7784% ( 2) 00:09:09.247 10889.058 - 10939.471: 97.7898% 
( 2) 00:09:09.247 10939.471 - 10989.883: 97.8011% ( 2) 00:09:09.247 10989.883 - 11040.295: 97.8125% ( 2) 00:09:09.247 11040.295 - 11090.708: 97.8182% ( 1) 00:09:09.247 11494.006 - 11544.418: 97.8239% ( 1) 00:09:09.247 11544.418 - 11594.831: 97.8409% ( 3) 00:09:09.247 11594.831 - 11645.243: 97.8864% ( 8) 00:09:09.247 11645.243 - 11695.655: 97.9261% ( 7) 00:09:09.247 11695.655 - 11746.068: 97.9489% ( 4) 00:09:09.247 11746.068 - 11796.480: 98.1364% ( 33) 00:09:09.247 11796.480 - 11846.892: 98.1648% ( 5) 00:09:09.247 11846.892 - 11897.305: 98.1875% ( 4) 00:09:09.247 11897.305 - 11947.717: 98.2045% ( 3) 00:09:09.247 11947.717 - 11998.129: 98.2614% ( 10) 00:09:09.247 11998.129 - 12048.542: 98.3352% ( 13) 00:09:09.247 12048.542 - 12098.954: 98.4261% ( 16) 00:09:09.247 12098.954 - 12149.366: 98.4716% ( 8) 00:09:09.247 12149.366 - 12199.778: 98.4886% ( 3) 00:09:09.247 12199.778 - 12250.191: 98.5057% ( 3) 00:09:09.247 12250.191 - 12300.603: 98.5227% ( 3) 00:09:09.247 12300.603 - 12351.015: 98.5455% ( 4) 00:09:09.247 12351.015 - 12401.428: 98.5739% ( 5) 00:09:09.247 12401.428 - 12451.840: 98.5909% ( 3) 00:09:09.247 12451.840 - 12502.252: 98.6193% ( 5) 00:09:09.247 12502.252 - 12552.665: 98.6420% ( 4) 00:09:09.247 12552.665 - 12603.077: 98.6534% ( 2) 00:09:09.247 12603.077 - 12653.489: 98.6648% ( 2) 00:09:09.247 12653.489 - 12703.902: 98.6875% ( 4) 00:09:09.247 12703.902 - 12754.314: 98.7273% ( 7) 00:09:09.247 12754.314 - 12804.726: 98.9034% ( 31) 00:09:09.247 12804.726 - 12855.138: 98.9318% ( 5) 00:09:09.247 12855.138 - 12905.551: 98.9545% ( 4) 00:09:09.247 12905.551 - 13006.375: 98.9773% ( 4) 00:09:09.247 13006.375 - 13107.200: 98.9886% ( 2) 00:09:09.247 13107.200 - 13208.025: 99.0114% ( 4) 00:09:09.247 13208.025 - 13308.849: 99.0284% ( 3) 00:09:09.247 13308.849 - 13409.674: 99.2443% ( 38) 00:09:09.247 13409.674 - 13510.498: 99.2614% ( 3) 00:09:09.247 13510.498 - 13611.323: 99.2727% ( 2) 00:09:09.247 18350.080 - 18450.905: 99.2784% ( 1) 00:09:09.247 18551.729 - 18652.554: 99.3125% ( 6) 00:09:09.247 18652.554 - 18753.378: 99.3750% ( 11) 00:09:09.247 18753.378 - 18854.203: 99.4375% ( 11) 00:09:09.247 18854.203 - 18955.028: 99.4943% ( 10) 00:09:09.248 18955.028 - 19055.852: 99.5227% ( 5) 00:09:09.248 19055.852 - 19156.677: 99.5398% ( 3) 00:09:09.248 19156.677 - 19257.502: 99.5682% ( 5) 00:09:09.248 19257.502 - 19358.326: 99.5909% ( 4) 00:09:09.248 19358.326 - 19459.151: 99.6136% ( 4) 00:09:09.248 19459.151 - 19559.975: 99.6364% ( 4) 00:09:09.248 24097.083 - 24197.908: 99.6420% ( 1) 00:09:09.248 24197.908 - 24298.732: 99.6705% ( 5) 00:09:09.248 24298.732 - 24399.557: 99.6932% ( 4) 00:09:09.248 24399.557 - 24500.382: 99.7102% ( 3) 00:09:09.248 24500.382 - 24601.206: 99.7273% ( 3) 00:09:09.248 24601.206 - 24702.031: 99.7330% ( 1) 00:09:09.248 24702.031 - 24802.855: 99.7443% ( 2) 00:09:09.248 24802.855 - 24903.680: 99.7614% ( 3) 00:09:09.248 24903.680 - 25004.505: 99.7784% ( 3) 00:09:09.248 25004.505 - 25105.329: 99.7955% ( 3) 00:09:09.248 25105.329 - 25206.154: 99.8125% ( 3) 00:09:09.248 25206.154 - 25306.978: 99.8295% ( 3) 00:09:09.248 25306.978 - 25407.803: 99.8466% ( 3) 00:09:09.248 25407.803 - 25508.628: 99.8580% ( 2) 00:09:09.248 25508.628 - 25609.452: 99.8750% ( 3) 00:09:09.248 25609.452 - 25710.277: 99.8920% ( 3) 00:09:09.248 25710.277 - 25811.102: 99.9148% ( 4) 00:09:09.248 25811.102 - 26012.751: 99.9659% ( 9) 00:09:09.248 26012.751 - 26214.400: 100.0000% ( 6) 00:09:09.248 00:09:09.248 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:09.248 
============================================================================== 00:09:09.248 Range in us Cumulative IO count 00:09:09.248 4612.726 - 4637.932: 0.0057% ( 1) 00:09:09.248 4637.932 - 4663.138: 0.0114% ( 1) 00:09:09.248 4663.138 - 4688.345: 0.0284% ( 3) 00:09:09.248 4688.345 - 4713.551: 0.0398% ( 2) 00:09:09.248 4713.551 - 4738.757: 0.0625% ( 4) 00:09:09.248 4738.757 - 4763.963: 0.1364% ( 13) 00:09:09.248 4763.963 - 4789.169: 0.2045% ( 12) 00:09:09.248 4789.169 - 4814.375: 0.2386% ( 6) 00:09:09.248 4814.375 - 4839.582: 0.2784% ( 7) 00:09:09.248 4839.582 - 4864.788: 0.2841% ( 1) 00:09:09.248 4864.788 - 4889.994: 0.2898% ( 1) 00:09:09.248 4889.994 - 4915.200: 0.3011% ( 2) 00:09:09.248 4915.200 - 4940.406: 0.3125% ( 2) 00:09:09.248 4940.406 - 4965.612: 0.3239% ( 2) 00:09:09.248 4965.612 - 4990.818: 0.3352% ( 2) 00:09:09.248 4990.818 - 5016.025: 0.3409% ( 1) 00:09:09.248 5016.025 - 5041.231: 0.3523% ( 2) 00:09:09.248 5041.231 - 5066.437: 0.3636% ( 2) 00:09:09.248 6099.889 - 6125.095: 0.3693% ( 1) 00:09:09.248 6125.095 - 6150.302: 0.3750% ( 1) 00:09:09.248 6150.302 - 6175.508: 0.3920% ( 3) 00:09:09.248 6225.920 - 6251.126: 0.4205% ( 5) 00:09:09.248 6251.126 - 6276.332: 0.4716% ( 9) 00:09:09.248 6276.332 - 6301.538: 0.5568% ( 15) 00:09:09.248 6301.538 - 6326.745: 0.6932% ( 24) 00:09:09.248 6326.745 - 6351.951: 0.8807% ( 33) 00:09:09.248 6351.951 - 6377.157: 1.1591% ( 49) 00:09:09.248 6377.157 - 6402.363: 1.4773% ( 56) 00:09:09.248 6402.363 - 6427.569: 1.7841% ( 54) 00:09:09.248 6427.569 - 6452.775: 2.2102% ( 75) 00:09:09.248 6452.775 - 6503.188: 3.6932% ( 261) 00:09:09.248 6503.188 - 6553.600: 5.5966% ( 335) 00:09:09.248 6553.600 - 6604.012: 8.9318% ( 587) 00:09:09.248 6604.012 - 6654.425: 13.1477% ( 742) 00:09:09.248 6654.425 - 6704.837: 18.8239% ( 999) 00:09:09.248 6704.837 - 6755.249: 25.9716% ( 1258) 00:09:09.248 6755.249 - 6805.662: 34.5000% ( 1501) 00:09:09.248 6805.662 - 6856.074: 43.0739% ( 1509) 00:09:09.248 6856.074 - 6906.486: 51.0909% ( 1411) 00:09:09.248 6906.486 - 6956.898: 57.3580% ( 1103) 00:09:09.248 6956.898 - 7007.311: 62.0114% ( 819) 00:09:09.248 7007.311 - 7057.723: 65.8182% ( 670) 00:09:09.248 7057.723 - 7108.135: 69.3182% ( 616) 00:09:09.248 7108.135 - 7158.548: 72.2273% ( 512) 00:09:09.248 7158.548 - 7208.960: 75.0739% ( 501) 00:09:09.248 7208.960 - 7259.372: 76.5852% ( 266) 00:09:09.248 7259.372 - 7309.785: 78.4148% ( 322) 00:09:09.248 7309.785 - 7360.197: 80.2216% ( 318) 00:09:09.248 7360.197 - 7410.609: 81.3636% ( 201) 00:09:09.248 7410.609 - 7461.022: 82.4318% ( 188) 00:09:09.248 7461.022 - 7511.434: 83.3523% ( 162) 00:09:09.248 7511.434 - 7561.846: 84.1761% ( 145) 00:09:09.248 7561.846 - 7612.258: 84.9716% ( 140) 00:09:09.248 7612.258 - 7662.671: 85.5739% ( 106) 00:09:09.248 7662.671 - 7713.083: 86.3295% ( 133) 00:09:09.248 7713.083 - 7763.495: 86.9432% ( 108) 00:09:09.248 7763.495 - 7813.908: 87.5341% ( 104) 00:09:09.248 7813.908 - 7864.320: 87.9886% ( 80) 00:09:09.248 7864.320 - 7914.732: 88.4830% ( 87) 00:09:09.248 7914.732 - 7965.145: 89.1250% ( 113) 00:09:09.248 7965.145 - 8015.557: 89.6477% ( 92) 00:09:09.248 8015.557 - 8065.969: 90.4375% ( 139) 00:09:09.248 8065.969 - 8116.382: 91.0682% ( 111) 00:09:09.248 8116.382 - 8166.794: 91.8977% ( 146) 00:09:09.248 8166.794 - 8217.206: 92.1932% ( 52) 00:09:09.248 8217.206 - 8267.618: 92.4375% ( 43) 00:09:09.248 8267.618 - 8318.031: 92.6591% ( 39) 00:09:09.248 8318.031 - 8368.443: 92.8920% ( 41) 00:09:09.248 8368.443 - 8418.855: 93.1818% ( 51) 00:09:09.248 8418.855 - 8469.268: 93.3807% ( 35) 00:09:09.248 
8469.268 - 8519.680: 93.6364% ( 45) 00:09:09.248 8519.680 - 8570.092: 93.7557% ( 21) 00:09:09.248 8570.092 - 8620.505: 93.8636% ( 19) 00:09:09.248 8620.505 - 8670.917: 93.9659% ( 18) 00:09:09.248 8670.917 - 8721.329: 94.1307% ( 29) 00:09:09.248 8721.329 - 8771.742: 94.2784% ( 26) 00:09:09.248 8771.742 - 8822.154: 94.3807% ( 18) 00:09:09.248 8822.154 - 8872.566: 94.4545% ( 13) 00:09:09.248 8872.566 - 8922.978: 94.5057% ( 9) 00:09:09.248 8922.978 - 8973.391: 94.5682% ( 11) 00:09:09.248 8973.391 - 9023.803: 94.6477% ( 14) 00:09:09.248 9023.803 - 9074.215: 94.9034% ( 45) 00:09:09.248 9074.215 - 9124.628: 95.0341% ( 23) 00:09:09.248 9124.628 - 9175.040: 95.2500% ( 38) 00:09:09.248 9175.040 - 9225.452: 95.4091% ( 28) 00:09:09.248 9225.452 - 9275.865: 95.4886% ( 14) 00:09:09.248 9275.865 - 9326.277: 95.5852% ( 17) 00:09:09.248 9326.277 - 9376.689: 95.6761% ( 16) 00:09:09.248 9376.689 - 9427.102: 95.8068% ( 23) 00:09:09.248 9427.102 - 9477.514: 96.0000% ( 34) 00:09:09.248 9477.514 - 9527.926: 96.1477% ( 26) 00:09:09.248 9527.926 - 9578.338: 96.3068% ( 28) 00:09:09.248 9578.338 - 9628.751: 96.3636% ( 10) 00:09:09.248 9628.751 - 9679.163: 96.4205% ( 10) 00:09:09.248 9679.163 - 9729.575: 96.4943% ( 13) 00:09:09.248 9729.575 - 9779.988: 96.5568% ( 11) 00:09:09.248 9779.988 - 9830.400: 96.6534% ( 17) 00:09:09.248 9830.400 - 9880.812: 96.7614% ( 19) 00:09:09.248 9880.812 - 9931.225: 96.8807% ( 21) 00:09:09.248 9931.225 - 9981.637: 96.9943% ( 20) 00:09:09.248 9981.637 - 10032.049: 97.0795% ( 15) 00:09:09.248 10032.049 - 10082.462: 97.1648% ( 15) 00:09:09.248 10082.462 - 10132.874: 97.2500% ( 15) 00:09:09.248 10132.874 - 10183.286: 97.3523% ( 18) 00:09:09.248 10183.286 - 10233.698: 97.4659% ( 20) 00:09:09.248 10233.698 - 10284.111: 97.5341% ( 12) 00:09:09.248 10284.111 - 10334.523: 97.6136% ( 14) 00:09:09.248 10334.523 - 10384.935: 97.6875% ( 13) 00:09:09.248 10384.935 - 10435.348: 97.7159% ( 5) 00:09:09.248 10435.348 - 10485.760: 97.7273% ( 2) 00:09:09.249 10485.760 - 10536.172: 97.7386% ( 2) 00:09:09.249 10536.172 - 10586.585: 97.7443% ( 1) 00:09:09.249 10586.585 - 10636.997: 97.7500% ( 1) 00:09:09.249 10636.997 - 10687.409: 97.7557% ( 1) 00:09:09.249 10687.409 - 10737.822: 97.7670% ( 2) 00:09:09.249 10737.822 - 10788.234: 97.7727% ( 1) 00:09:09.249 10788.234 - 10838.646: 97.7841% ( 2) 00:09:09.249 10838.646 - 10889.058: 97.7898% ( 1) 00:09:09.249 10889.058 - 10939.471: 97.8011% ( 2) 00:09:09.249 10939.471 - 10989.883: 97.8125% ( 2) 00:09:09.249 10989.883 - 11040.295: 97.8182% ( 1) 00:09:09.249 11090.708 - 11141.120: 97.8239% ( 1) 00:09:09.249 11141.120 - 11191.532: 97.8295% ( 1) 00:09:09.249 11494.006 - 11544.418: 97.8352% ( 1) 00:09:09.249 11544.418 - 11594.831: 97.8636% ( 5) 00:09:09.249 11594.831 - 11645.243: 97.8977% ( 6) 00:09:09.249 11645.243 - 11695.655: 97.9375% ( 7) 00:09:09.249 11695.655 - 11746.068: 97.9773% ( 7) 00:09:09.249 11746.068 - 11796.480: 98.0455% ( 12) 00:09:09.249 11796.480 - 11846.892: 98.3523% ( 54) 00:09:09.249 11846.892 - 11897.305: 98.3977% ( 8) 00:09:09.249 11897.305 - 11947.717: 98.4148% ( 3) 00:09:09.249 11947.717 - 11998.129: 98.4375% ( 4) 00:09:09.249 11998.129 - 12048.542: 98.4432% ( 1) 00:09:09.249 12048.542 - 12098.954: 98.4545% ( 2) 00:09:09.249 12098.954 - 12149.366: 98.4602% ( 1) 00:09:09.249 12149.366 - 12199.778: 98.4716% ( 2) 00:09:09.249 12199.778 - 12250.191: 98.4943% ( 4) 00:09:09.249 12250.191 - 12300.603: 98.5227% ( 5) 00:09:09.249 12300.603 - 12351.015: 98.5625% ( 7) 00:09:09.249 12351.015 - 12401.428: 98.5909% ( 5) 00:09:09.249 12401.428 - 12451.840: 
98.6250% ( 6) 00:09:09.249 12451.840 - 12502.252: 98.6477% ( 4) 00:09:09.249 12502.252 - 12552.665: 98.6705% ( 4) 00:09:09.249 12552.665 - 12603.077: 98.6818% ( 2) 00:09:09.249 12603.077 - 12653.489: 98.7330% ( 9) 00:09:09.249 12653.489 - 12703.902: 98.8125% ( 14) 00:09:09.249 12703.902 - 12754.314: 98.8466% ( 6) 00:09:09.249 12754.314 - 12804.726: 98.8580% ( 2) 00:09:09.249 12804.726 - 12855.138: 98.8636% ( 1) 00:09:09.249 12855.138 - 12905.551: 98.8750% ( 2) 00:09:09.249 12905.551 - 13006.375: 98.8920% ( 3) 00:09:09.249 13006.375 - 13107.200: 98.9318% ( 7) 00:09:09.249 13107.200 - 13208.025: 98.9943% ( 11) 00:09:09.249 13208.025 - 13308.849: 99.1477% ( 27) 00:09:09.249 13308.849 - 13409.674: 99.2273% ( 14) 00:09:09.249 13409.674 - 13510.498: 99.2670% ( 7) 00:09:09.249 13510.498 - 13611.323: 99.2727% ( 1) 00:09:09.249 17946.782 - 18047.606: 99.2955% ( 4) 00:09:09.249 18047.606 - 18148.431: 99.3125% ( 3) 00:09:09.249 18148.431 - 18249.255: 99.3409% ( 5) 00:09:09.249 18249.255 - 18350.080: 99.3636% ( 4) 00:09:09.249 18350.080 - 18450.905: 99.3807% ( 3) 00:09:09.249 18450.905 - 18551.729: 99.3977% ( 3) 00:09:09.249 18551.729 - 18652.554: 99.4205% ( 4) 00:09:09.249 18652.554 - 18753.378: 99.4489% ( 5) 00:09:09.249 18753.378 - 18854.203: 99.4830% ( 6) 00:09:09.249 18854.203 - 18955.028: 99.5057% ( 4) 00:09:09.249 18955.028 - 19055.852: 99.5341% ( 5) 00:09:09.249 19055.852 - 19156.677: 99.5511% ( 3) 00:09:09.249 19156.677 - 19257.502: 99.5739% ( 4) 00:09:09.249 19257.502 - 19358.326: 99.5966% ( 4) 00:09:09.249 19358.326 - 19459.151: 99.6193% ( 4) 00:09:09.249 19459.151 - 19559.975: 99.6364% ( 3) 00:09:09.249 23794.609 - 23895.434: 99.6420% ( 1) 00:09:09.249 24097.083 - 24197.908: 99.6534% ( 2) 00:09:09.249 24197.908 - 24298.732: 99.6591% ( 1) 00:09:09.249 24298.732 - 24399.557: 99.6648% ( 1) 00:09:09.249 24500.382 - 24601.206: 99.6761% ( 2) 00:09:09.249 24601.206 - 24702.031: 99.7102% ( 6) 00:09:09.249 24702.031 - 24802.855: 99.7386% ( 5) 00:09:09.249 24802.855 - 24903.680: 99.7727% ( 6) 00:09:09.249 24903.680 - 25004.505: 99.8068% ( 6) 00:09:09.249 25004.505 - 25105.329: 99.8295% ( 4) 00:09:09.249 25105.329 - 25206.154: 99.8523% ( 4) 00:09:09.249 25206.154 - 25306.978: 99.8693% ( 3) 00:09:09.249 25306.978 - 25407.803: 99.8920% ( 4) 00:09:09.249 25407.803 - 25508.628: 99.9148% ( 4) 00:09:09.249 25508.628 - 25609.452: 99.9318% ( 3) 00:09:09.249 25609.452 - 25710.277: 99.9545% ( 4) 00:09:09.249 25710.277 - 25811.102: 99.9716% ( 3) 00:09:09.249 25811.102 - 26012.751: 100.0000% ( 5) 00:09:09.249 00:09:09.249 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:09.249 ============================================================================== 00:09:09.249 Range in us Cumulative IO count 00:09:09.249 4310.252 - 4335.458: 0.0057% ( 1) 00:09:09.249 4335.458 - 4360.665: 0.0170% ( 2) 00:09:09.249 4360.665 - 4385.871: 0.0227% ( 1) 00:09:09.249 4385.871 - 4411.077: 0.0341% ( 2) 00:09:09.249 4411.077 - 4436.283: 0.0795% ( 8) 00:09:09.249 4436.283 - 4461.489: 0.1477% ( 12) 00:09:09.249 4461.489 - 4486.695: 0.1989% ( 9) 00:09:09.249 4486.695 - 4511.902: 0.2443% ( 8) 00:09:09.249 4511.902 - 4537.108: 0.2727% ( 5) 00:09:09.249 4537.108 - 4562.314: 0.2841% ( 2) 00:09:09.249 4562.314 - 4587.520: 0.2955% ( 2) 00:09:09.249 4587.520 - 4612.726: 0.3068% ( 2) 00:09:09.249 4612.726 - 4637.932: 0.3182% ( 2) 00:09:09.249 4637.932 - 4663.138: 0.3295% ( 2) 00:09:09.249 4663.138 - 4688.345: 0.3409% ( 2) 00:09:09.249 4688.345 - 4713.551: 0.3523% ( 2) 00:09:09.249 4713.551 - 4738.757: 0.3636% ( 2) 
00:09:09.249 5847.828 - 5873.034: 0.3693% ( 1) 00:09:09.249 5923.446 - 5948.652: 0.3750% ( 1) 00:09:09.249 5948.652 - 5973.858: 0.3864% ( 2) 00:09:09.249 5973.858 - 5999.065: 0.3977% ( 2) 00:09:09.249 5999.065 - 6024.271: 0.4148% ( 3) 00:09:09.249 6024.271 - 6049.477: 0.4489% ( 6) 00:09:09.249 6049.477 - 6074.683: 0.5057% ( 10) 00:09:09.249 6074.683 - 6099.889: 0.5795% ( 13) 00:09:09.249 6099.889 - 6125.095: 0.6591% ( 14) 00:09:09.249 6125.095 - 6150.302: 0.6705% ( 2) 00:09:09.249 6150.302 - 6175.508: 0.6818% ( 2) 00:09:09.249 6175.508 - 6200.714: 0.6989% ( 3) 00:09:09.249 6200.714 - 6225.920: 0.7102% ( 2) 00:09:09.249 6225.920 - 6251.126: 0.7386% ( 5) 00:09:09.249 6251.126 - 6276.332: 0.7898% ( 9) 00:09:09.249 6276.332 - 6301.538: 0.8352% ( 8) 00:09:09.249 6301.538 - 6326.745: 0.9261% ( 16) 00:09:09.249 6326.745 - 6351.951: 1.0341% ( 19) 00:09:09.249 6351.951 - 6377.157: 1.2273% ( 34) 00:09:09.249 6377.157 - 6402.363: 1.4318% ( 36) 00:09:09.249 6402.363 - 6427.569: 1.7330% ( 53) 00:09:09.249 6427.569 - 6452.775: 2.1364% ( 71) 00:09:09.249 6452.775 - 6503.188: 3.4943% ( 239) 00:09:09.249 6503.188 - 6553.600: 5.4261% ( 340) 00:09:09.249 6553.600 - 6604.012: 8.7330% ( 582) 00:09:09.249 6604.012 - 6654.425: 12.5284% ( 668) 00:09:09.249 6654.425 - 6704.837: 18.4261% ( 1038) 00:09:09.249 6704.837 - 6755.249: 25.1534% ( 1184) 00:09:09.249 6755.249 - 6805.662: 33.6875% ( 1502) 00:09:09.249 6805.662 - 6856.074: 43.5227% ( 1731) 00:09:09.249 6856.074 - 6906.486: 51.5625% ( 1415) 00:09:09.249 6906.486 - 6956.898: 57.4602% ( 1038) 00:09:09.249 6956.898 - 7007.311: 61.9943% ( 798) 00:09:09.249 7007.311 - 7057.723: 66.4659% ( 787) 00:09:09.249 7057.723 - 7108.135: 70.0170% ( 625) 00:09:09.249 7108.135 - 7158.548: 73.0739% ( 538) 00:09:09.249 7158.548 - 7208.960: 75.0682% ( 351) 00:09:09.249 7208.960 - 7259.372: 76.6932% ( 286) 00:09:09.249 7259.372 - 7309.785: 78.5455% ( 326) 00:09:09.249 7309.785 - 7360.197: 80.1932% ( 290) 00:09:09.249 7360.197 - 7410.609: 81.3409% ( 202) 00:09:09.249 7410.609 - 7461.022: 82.4545% ( 196) 00:09:09.249 7461.022 - 7511.434: 83.6591% ( 212) 00:09:09.249 7511.434 - 7561.846: 84.4205% ( 134) 00:09:09.249 7561.846 - 7612.258: 84.9602% ( 95) 00:09:09.249 7612.258 - 7662.671: 85.6080% ( 114) 00:09:09.249 7662.671 - 7713.083: 86.4545% ( 149) 00:09:09.249 7713.083 - 7763.495: 87.0966% ( 113) 00:09:09.249 7763.495 - 7813.908: 87.5682% ( 83) 00:09:09.249 7813.908 - 7864.320: 87.9943% ( 75) 00:09:09.249 7864.320 - 7914.732: 88.6648% ( 118) 00:09:09.249 7914.732 - 7965.145: 89.1648% ( 88) 00:09:09.250 7965.145 - 8015.557: 89.6648% ( 88) 00:09:09.250 8015.557 - 8065.969: 90.1250% ( 81) 00:09:09.250 8065.969 - 8116.382: 90.6932% ( 100) 00:09:09.250 8116.382 - 8166.794: 91.3750% ( 120) 00:09:09.250 8166.794 - 8217.206: 91.7386% ( 64) 00:09:09.250 8217.206 - 8267.618: 92.0227% ( 50) 00:09:09.250 8267.618 - 8318.031: 92.3182% ( 52) 00:09:09.250 8318.031 - 8368.443: 92.7500% ( 76) 00:09:09.250 8368.443 - 8418.855: 93.1307% ( 67) 00:09:09.250 8418.855 - 8469.268: 93.2045% ( 13) 00:09:09.250 8469.268 - 8519.680: 93.2898% ( 15) 00:09:09.250 8519.680 - 8570.092: 93.4148% ( 22) 00:09:09.250 8570.092 - 8620.505: 93.5568% ( 25) 00:09:09.250 8620.505 - 8670.917: 93.7330% ( 31) 00:09:09.250 8670.917 - 8721.329: 93.8693% ( 24) 00:09:09.250 8721.329 - 8771.742: 93.9659% ( 17) 00:09:09.250 8771.742 - 8822.154: 94.0682% ( 18) 00:09:09.250 8822.154 - 8872.566: 94.2670% ( 35) 00:09:09.250 8872.566 - 8922.978: 94.3807% ( 20) 00:09:09.250 8922.978 - 8973.391: 94.6193% ( 42) 00:09:09.250 8973.391 - 
9023.803: 94.7273% ( 19) 00:09:09.250 9023.803 - 9074.215: 94.8239% ( 17) 00:09:09.250 9074.215 - 9124.628: 95.1818% ( 63) 00:09:09.250 9124.628 - 9175.040: 95.3693% ( 33) 00:09:09.250 9175.040 - 9225.452: 95.4943% ( 22) 00:09:09.250 9225.452 - 9275.865: 95.6307% ( 24) 00:09:09.250 9275.865 - 9326.277: 95.7727% ( 25) 00:09:09.250 9326.277 - 9376.689: 95.8466% ( 13) 00:09:09.250 9376.689 - 9427.102: 95.9602% ( 20) 00:09:09.250 9427.102 - 9477.514: 96.1023% ( 25) 00:09:09.250 9477.514 - 9527.926: 96.3466% ( 43) 00:09:09.250 9527.926 - 9578.338: 96.5795% ( 41) 00:09:09.250 9578.338 - 9628.751: 96.6250% ( 8) 00:09:09.250 9628.751 - 9679.163: 96.7443% ( 21) 00:09:09.250 9679.163 - 9729.575: 96.8466% ( 18) 00:09:09.250 9729.575 - 9779.988: 96.9602% ( 20) 00:09:09.250 9779.988 - 9830.400: 97.0170% ( 10) 00:09:09.250 9830.400 - 9880.812: 97.0852% ( 12) 00:09:09.250 9880.812 - 9931.225: 97.1420% ( 10) 00:09:09.250 9931.225 - 9981.637: 97.1875% ( 8) 00:09:09.250 9981.637 - 10032.049: 97.2500% ( 11) 00:09:09.250 10032.049 - 10082.462: 97.2955% ( 8) 00:09:09.250 10082.462 - 10132.874: 97.3466% ( 9) 00:09:09.250 10132.874 - 10183.286: 97.3977% ( 9) 00:09:09.250 10183.286 - 10233.698: 97.4375% ( 7) 00:09:09.250 10233.698 - 10284.111: 97.4602% ( 4) 00:09:09.250 10284.111 - 10334.523: 97.5057% ( 8) 00:09:09.250 10334.523 - 10384.935: 97.5341% ( 5) 00:09:09.250 10384.935 - 10435.348: 97.5682% ( 6) 00:09:09.250 10435.348 - 10485.760: 97.6080% ( 7) 00:09:09.250 10485.760 - 10536.172: 97.6420% ( 6) 00:09:09.250 10536.172 - 10586.585: 97.6989% ( 10) 00:09:09.250 10586.585 - 10636.997: 97.7443% ( 8) 00:09:09.250 10636.997 - 10687.409: 97.7898% ( 8) 00:09:09.250 10687.409 - 10737.822: 97.8068% ( 3) 00:09:09.250 10737.822 - 10788.234: 97.8239% ( 3) 00:09:09.250 10788.234 - 10838.646: 97.8466% ( 4) 00:09:09.250 10838.646 - 10889.058: 97.8636% ( 3) 00:09:09.250 10889.058 - 10939.471: 97.8920% ( 5) 00:09:09.250 10939.471 - 10989.883: 97.8977% ( 1) 00:09:09.250 10989.883 - 11040.295: 97.9261% ( 5) 00:09:09.250 11040.295 - 11090.708: 97.9602% ( 6) 00:09:09.250 11090.708 - 11141.120: 97.9830% ( 4) 00:09:09.250 11141.120 - 11191.532: 98.0284% ( 8) 00:09:09.250 11191.532 - 11241.945: 98.0795% ( 9) 00:09:09.250 11241.945 - 11292.357: 98.0909% ( 2) 00:09:09.250 11292.357 - 11342.769: 98.0966% ( 1) 00:09:09.250 11342.769 - 11393.182: 98.1080% ( 2) 00:09:09.250 11393.182 - 11443.594: 98.1193% ( 2) 00:09:09.250 11443.594 - 11494.006: 98.1364% ( 3) 00:09:09.250 11494.006 - 11544.418: 98.1591% ( 4) 00:09:09.250 11544.418 - 11594.831: 98.1818% ( 4) 00:09:09.250 11594.831 - 11645.243: 98.1989% ( 3) 00:09:09.250 11645.243 - 11695.655: 98.2273% ( 5) 00:09:09.250 11695.655 - 11746.068: 98.2386% ( 2) 00:09:09.250 11746.068 - 11796.480: 98.2614% ( 4) 00:09:09.250 11796.480 - 11846.892: 98.4716% ( 37) 00:09:09.250 11846.892 - 11897.305: 98.4886% ( 3) 00:09:09.250 11897.305 - 11947.717: 98.4943% ( 1) 00:09:09.250 11947.717 - 11998.129: 98.5057% ( 2) 00:09:09.250 11998.129 - 12048.542: 98.5114% ( 1) 00:09:09.250 12048.542 - 12098.954: 98.5227% ( 2) 00:09:09.250 12098.954 - 12149.366: 98.5341% ( 2) 00:09:09.250 12149.366 - 12199.778: 98.5398% ( 1) 00:09:09.250 12199.778 - 12250.191: 98.5511% ( 2) 00:09:09.250 12250.191 - 12300.603: 98.5568% ( 1) 00:09:09.250 12351.015 - 12401.428: 98.5625% ( 1) 00:09:09.250 12401.428 - 12451.840: 98.5909% ( 5) 00:09:09.250 12451.840 - 12502.252: 98.6250% ( 6) 00:09:09.250 12502.252 - 12552.665: 98.6420% ( 3) 00:09:09.250 12552.665 - 12603.077: 98.6705% ( 5) 00:09:09.250 12603.077 - 12653.489: 98.6989% ( 
5) 00:09:09.250 12653.489 - 12703.902: 98.7443% ( 8) 00:09:09.250 12703.902 - 12754.314: 98.7784% ( 6) 00:09:09.250 12754.314 - 12804.726: 98.7898% ( 2) 00:09:09.250 12804.726 - 12855.138: 98.7955% ( 1) 00:09:09.250 12855.138 - 12905.551: 98.8068% ( 2) 00:09:09.250 12905.551 - 13006.375: 98.8239% ( 3) 00:09:09.250 13006.375 - 13107.200: 98.8864% ( 11) 00:09:09.250 13107.200 - 13208.025: 98.9545% ( 12) 00:09:09.250 13208.025 - 13308.849: 99.1989% ( 43) 00:09:09.250 13308.849 - 13409.674: 99.2557% ( 10) 00:09:09.250 13409.674 - 13510.498: 99.2670% ( 2) 00:09:09.250 13510.498 - 13611.323: 99.2727% ( 1) 00:09:09.250 17946.782 - 18047.606: 99.3011% ( 5) 00:09:09.250 18047.606 - 18148.431: 99.3466% ( 8) 00:09:09.250 18148.431 - 18249.255: 99.3977% ( 9) 00:09:09.250 18249.255 - 18350.080: 99.4489% ( 9) 00:09:09.250 18350.080 - 18450.905: 99.4830% ( 6) 00:09:09.250 18450.905 - 18551.729: 99.5170% ( 6) 00:09:09.250 18551.729 - 18652.554: 99.5398% ( 4) 00:09:09.250 18652.554 - 18753.378: 99.5568% ( 3) 00:09:09.250 18753.378 - 18854.203: 99.5739% ( 3) 00:09:09.250 18854.203 - 18955.028: 99.5852% ( 2) 00:09:09.250 18955.028 - 19055.852: 99.6023% ( 3) 00:09:09.250 19055.852 - 19156.677: 99.6250% ( 4) 00:09:09.250 19156.677 - 19257.502: 99.6364% ( 2) 00:09:09.250 23592.960 - 23693.785: 99.6477% ( 2) 00:09:09.250 23693.785 - 23794.609: 99.6648% ( 3) 00:09:09.250 23794.609 - 23895.434: 99.6818% ( 3) 00:09:09.250 23895.434 - 23996.258: 99.7045% ( 4) 00:09:09.250 23996.258 - 24097.083: 99.7273% ( 4) 00:09:09.250 24097.083 - 24197.908: 99.7330% ( 1) 00:09:09.250 24197.908 - 24298.732: 99.7386% ( 1) 00:09:09.250 24298.732 - 24399.557: 99.7557% ( 3) 00:09:09.250 24399.557 - 24500.382: 99.7727% ( 3) 00:09:09.250 24500.382 - 24601.206: 99.7898% ( 3) 00:09:09.250 24601.206 - 24702.031: 99.8011% ( 2) 00:09:09.250 24702.031 - 24802.855: 99.8182% ( 3) 00:09:09.250 24802.855 - 24903.680: 99.8409% ( 4) 00:09:09.250 24903.680 - 25004.505: 99.8580% ( 3) 00:09:09.250 25004.505 - 25105.329: 99.8750% ( 3) 00:09:09.250 25105.329 - 25206.154: 99.8920% ( 3) 00:09:09.251 25206.154 - 25306.978: 99.9205% ( 5) 00:09:09.251 25306.978 - 25407.803: 99.9432% ( 4) 00:09:09.251 25407.803 - 25508.628: 99.9716% ( 5) 00:09:09.251 25508.628 - 25609.452: 99.9886% ( 3) 00:09:09.251 25609.452 - 25710.277: 100.0000% ( 2) 00:09:09.251 00:09:09.251 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:09.251 ============================================================================== 00:09:09.251 Range in us Cumulative IO count 00:09:09.251 4108.603 - 4133.809: 0.0284% ( 5) 00:09:09.251 4133.809 - 4159.015: 0.0625% ( 6) 00:09:09.251 4159.015 - 4184.222: 0.1023% ( 7) 00:09:09.251 4184.222 - 4209.428: 0.1477% ( 8) 00:09:09.251 4209.428 - 4234.634: 0.1932% ( 8) 00:09:09.251 4234.634 - 4259.840: 0.2330% ( 7) 00:09:09.251 4259.840 - 4285.046: 0.2784% ( 8) 00:09:09.251 4285.046 - 4310.252: 0.2841% ( 1) 00:09:09.251 4310.252 - 4335.458: 0.3011% ( 3) 00:09:09.251 4335.458 - 4360.665: 0.3068% ( 1) 00:09:09.251 4360.665 - 4385.871: 0.3182% ( 2) 00:09:09.251 4385.871 - 4411.077: 0.3295% ( 2) 00:09:09.251 4411.077 - 4436.283: 0.3409% ( 2) 00:09:09.251 4436.283 - 4461.489: 0.3523% ( 2) 00:09:09.251 4461.489 - 4486.695: 0.3636% ( 2) 00:09:09.251 5646.178 - 5671.385: 0.3693% ( 1) 00:09:09.251 5671.385 - 5696.591: 0.3864% ( 3) 00:09:09.251 5696.591 - 5721.797: 0.3977% ( 2) 00:09:09.251 5721.797 - 5747.003: 0.4432% ( 8) 00:09:09.251 5747.003 - 5772.209: 0.4943% ( 9) 00:09:09.251 5772.209 - 5797.415: 0.5568% ( 11) 00:09:09.251 5797.415 - 5822.622: 
0.6193% ( 11) 00:09:09.251 5822.622 - 5847.828: 0.6364% ( 3) 00:09:09.251 5847.828 - 5873.034: 0.6477% ( 2) 00:09:09.251 5873.034 - 5898.240: 0.6591% ( 2) 00:09:09.251 5898.240 - 5923.446: 0.6705% ( 2) 00:09:09.251 5923.446 - 5948.652: 0.6818% ( 2) 00:09:09.251 5948.652 - 5973.858: 0.6932% ( 2) 00:09:09.251 5973.858 - 5999.065: 0.7045% ( 2) 00:09:09.251 5999.065 - 6024.271: 0.7159% ( 2) 00:09:09.251 6024.271 - 6049.477: 0.7273% ( 2) 00:09:09.251 6049.477 - 6074.683: 0.7330% ( 1) 00:09:09.251 6099.889 - 6125.095: 0.7386% ( 1) 00:09:09.251 6125.095 - 6150.302: 0.7557% ( 3) 00:09:09.251 6150.302 - 6175.508: 0.7784% ( 4) 00:09:09.251 6175.508 - 6200.714: 0.7841% ( 1) 00:09:09.251 6200.714 - 6225.920: 0.7898% ( 1) 00:09:09.251 6225.920 - 6251.126: 0.8011% ( 2) 00:09:09.251 6251.126 - 6276.332: 0.8295% ( 5) 00:09:09.251 6276.332 - 6301.538: 0.9034% ( 13) 00:09:09.251 6301.538 - 6326.745: 0.9773% ( 13) 00:09:09.251 6326.745 - 6351.951: 1.1250% ( 26) 00:09:09.251 6351.951 - 6377.157: 1.2614% ( 24) 00:09:09.251 6377.157 - 6402.363: 1.4489% ( 33) 00:09:09.251 6402.363 - 6427.569: 1.7273% ( 49) 00:09:09.251 6427.569 - 6452.775: 2.1648% ( 77) 00:09:09.251 6452.775 - 6503.188: 3.5227% ( 239) 00:09:09.251 6503.188 - 6553.600: 5.3693% ( 325) 00:09:09.251 6553.600 - 6604.012: 8.2216% ( 502) 00:09:09.251 6604.012 - 6654.425: 12.5795% ( 767) 00:09:09.251 6654.425 - 6704.837: 18.7841% ( 1092) 00:09:09.251 6704.837 - 6755.249: 25.5341% ( 1188) 00:09:09.251 6755.249 - 6805.662: 34.1420% ( 1515) 00:09:09.251 6805.662 - 6856.074: 43.2727% ( 1607) 00:09:09.251 6856.074 - 6906.486: 51.6136% ( 1468) 00:09:09.251 6906.486 - 6956.898: 58.0398% ( 1131) 00:09:09.251 6956.898 - 7007.311: 62.3750% ( 763) 00:09:09.251 7007.311 - 7057.723: 66.3011% ( 691) 00:09:09.251 7057.723 - 7108.135: 69.6307% ( 586) 00:09:09.251 7108.135 - 7158.548: 72.3352% ( 476) 00:09:09.251 7158.548 - 7208.960: 74.3011% ( 346) 00:09:09.251 7208.960 - 7259.372: 76.3977% ( 369) 00:09:09.251 7259.372 - 7309.785: 78.2557% ( 327) 00:09:09.251 7309.785 - 7360.197: 80.2500% ( 351) 00:09:09.251 7360.197 - 7410.609: 81.4602% ( 213) 00:09:09.251 7410.609 - 7461.022: 82.4489% ( 174) 00:09:09.251 7461.022 - 7511.434: 83.5341% ( 191) 00:09:09.251 7511.434 - 7561.846: 84.6989% ( 205) 00:09:09.251 7561.846 - 7612.258: 85.3750% ( 119) 00:09:09.251 7612.258 - 7662.671: 85.8068% ( 76) 00:09:09.251 7662.671 - 7713.083: 86.4943% ( 121) 00:09:09.251 7713.083 - 7763.495: 86.8580% ( 64) 00:09:09.251 7763.495 - 7813.908: 87.3977% ( 95) 00:09:09.251 7813.908 - 7864.320: 88.0114% ( 108) 00:09:09.251 7864.320 - 7914.732: 88.5227% ( 90) 00:09:09.251 7914.732 - 7965.145: 88.9261% ( 71) 00:09:09.251 7965.145 - 8015.557: 89.3750% ( 79) 00:09:09.251 8015.557 - 8065.969: 90.0170% ( 113) 00:09:09.251 8065.969 - 8116.382: 90.5966% ( 102) 00:09:09.251 8116.382 - 8166.794: 91.1477% ( 97) 00:09:09.251 8166.794 - 8217.206: 91.6705% ( 92) 00:09:09.251 8217.206 - 8267.618: 91.9318% ( 46) 00:09:09.251 8267.618 - 8318.031: 92.1591% ( 40) 00:09:09.251 8318.031 - 8368.443: 92.3580% ( 35) 00:09:09.251 8368.443 - 8418.855: 92.5455% ( 33) 00:09:09.251 8418.855 - 8469.268: 92.7159% ( 30) 00:09:09.251 8469.268 - 8519.680: 92.9830% ( 47) 00:09:09.251 8519.680 - 8570.092: 93.0739% ( 16) 00:09:09.251 8570.092 - 8620.505: 93.1989% ( 22) 00:09:09.251 8620.505 - 8670.917: 93.5909% ( 69) 00:09:09.251 8670.917 - 8721.329: 93.8409% ( 44) 00:09:09.251 8721.329 - 8771.742: 94.0455% ( 36) 00:09:09.251 8771.742 - 8822.154: 94.1875% ( 25) 00:09:09.251 8822.154 - 8872.566: 94.4489% ( 46) 00:09:09.251 8872.566 - 
8922.978: 94.5455% ( 17) 00:09:09.251 8922.978 - 8973.391: 94.7386% ( 34) 00:09:09.251 8973.391 - 9023.803: 94.9148% ( 31) 00:09:09.251 9023.803 - 9074.215: 95.0966% ( 32) 00:09:09.251 9074.215 - 9124.628: 95.4261% ( 58) 00:09:09.251 9124.628 - 9175.040: 95.9034% ( 84) 00:09:09.251 9175.040 - 9225.452: 96.0341% ( 23) 00:09:09.251 9225.452 - 9275.865: 96.1420% ( 19) 00:09:09.251 9275.865 - 9326.277: 96.2614% ( 21) 00:09:09.251 9326.277 - 9376.689: 96.4205% ( 28) 00:09:09.251 9376.689 - 9427.102: 96.6761% ( 45) 00:09:09.251 9427.102 - 9477.514: 96.7955% ( 21) 00:09:09.251 9477.514 - 9527.926: 96.8466% ( 9) 00:09:09.251 9527.926 - 9578.338: 96.9091% ( 11) 00:09:09.251 9578.338 - 9628.751: 96.9545% ( 8) 00:09:09.251 9628.751 - 9679.163: 96.9830% ( 5) 00:09:09.251 9679.163 - 9729.575: 97.0057% ( 4) 00:09:09.251 9729.575 - 9779.988: 97.0341% ( 5) 00:09:09.251 9779.988 - 9830.400: 97.0511% ( 3) 00:09:09.251 9830.400 - 9880.812: 97.0625% ( 2) 00:09:09.251 9880.812 - 9931.225: 97.0966% ( 6) 00:09:09.251 9931.225 - 9981.637: 97.1307% ( 6) 00:09:09.251 9981.637 - 10032.049: 97.1534% ( 4) 00:09:09.251 10032.049 - 10082.462: 97.1761% ( 4) 00:09:09.251 10082.462 - 10132.874: 97.2159% ( 7) 00:09:09.251 10132.874 - 10183.286: 97.2500% ( 6) 00:09:09.251 10183.286 - 10233.698: 97.2784% ( 5) 00:09:09.251 10233.698 - 10284.111: 97.3182% ( 7) 00:09:09.251 10284.111 - 10334.523: 97.3693% ( 9) 00:09:09.251 10334.523 - 10384.935: 97.4205% ( 9) 00:09:09.252 10384.935 - 10435.348: 97.4375% ( 3) 00:09:09.252 10435.348 - 10485.760: 97.4545% ( 3) 00:09:09.252 10485.760 - 10536.172: 97.4886% ( 6) 00:09:09.252 10536.172 - 10586.585: 97.5398% ( 9) 00:09:09.252 10586.585 - 10636.997: 97.5739% ( 6) 00:09:09.252 10636.997 - 10687.409: 97.6307% ( 10) 00:09:09.252 10687.409 - 10737.822: 97.7102% ( 14) 00:09:09.252 10737.822 - 10788.234: 97.7557% ( 8) 00:09:09.252 10788.234 - 10838.646: 97.8182% ( 11) 00:09:09.252 10838.646 - 10889.058: 97.8580% ( 7) 00:09:09.252 10889.058 - 10939.471: 97.8977% ( 7) 00:09:09.252 10939.471 - 10989.883: 97.9205% ( 4) 00:09:09.252 10989.883 - 11040.295: 97.9489% ( 5) 00:09:09.252 11040.295 - 11090.708: 97.9886% ( 7) 00:09:09.252 11090.708 - 11141.120: 98.0455% ( 10) 00:09:09.252 11141.120 - 11191.532: 98.1023% ( 10) 00:09:09.252 11191.532 - 11241.945: 98.1534% ( 9) 00:09:09.252 11241.945 - 11292.357: 98.1705% ( 3) 00:09:09.252 11292.357 - 11342.769: 98.1761% ( 1) 00:09:09.252 11342.769 - 11393.182: 98.1818% ( 1) 00:09:09.252 11443.594 - 11494.006: 98.2102% ( 5) 00:09:09.252 11494.006 - 11544.418: 98.2557% ( 8) 00:09:09.252 11544.418 - 11594.831: 98.3182% ( 11) 00:09:09.252 11594.831 - 11645.243: 98.3750% ( 10) 00:09:09.252 11645.243 - 11695.655: 98.4602% ( 15) 00:09:09.252 11695.655 - 11746.068: 98.4886% ( 5) 00:09:09.252 11746.068 - 11796.480: 98.5114% ( 4) 00:09:09.252 11796.480 - 11846.892: 98.5682% ( 10) 00:09:09.252 11846.892 - 11897.305: 98.6364% ( 12) 00:09:09.252 11897.305 - 11947.717: 98.7159% ( 14) 00:09:09.252 11947.717 - 11998.129: 98.7614% ( 8) 00:09:09.252 11998.129 - 12048.542: 98.7784% ( 3) 00:09:09.252 12048.542 - 12098.954: 98.7898% ( 2) 00:09:09.252 12098.954 - 12149.366: 98.7955% ( 1) 00:09:09.252 12149.366 - 12199.778: 98.8125% ( 3) 00:09:09.252 12199.778 - 12250.191: 98.8239% ( 2) 00:09:09.252 12250.191 - 12300.603: 98.8352% ( 2) 00:09:09.252 12300.603 - 12351.015: 98.8466% ( 2) 00:09:09.252 12351.015 - 12401.428: 98.8580% ( 2) 00:09:09.252 12401.428 - 12451.840: 98.8636% ( 1) 00:09:09.252 12451.840 - 12502.252: 98.8750% ( 2) 00:09:09.252 12502.252 - 12552.665: 98.8807% ( 1) 
00:09:09.252 12552.665 - 12603.077: 98.8920% ( 2) 00:09:09.252 12603.077 - 12653.489: 98.8977% ( 1) 00:09:09.252 12653.489 - 12703.902: 98.9091% ( 2) 00:09:09.252 13107.200 - 13208.025: 98.9148% ( 1) 00:09:09.252 13208.025 - 13308.849: 98.9716% ( 10) 00:09:09.252 13308.849 - 13409.674: 99.2045% ( 41) 00:09:09.252 13409.674 - 13510.498: 99.2386% ( 6) 00:09:09.252 13510.498 - 13611.323: 99.2670% ( 5) 00:09:09.252 13611.323 - 13712.148: 99.2727% ( 1) 00:09:09.252 17341.834 - 17442.658: 99.2784% ( 1) 00:09:09.252 17442.658 - 17543.483: 99.3068% ( 5) 00:09:09.252 17543.483 - 17644.308: 99.3352% ( 5) 00:09:09.252 17644.308 - 17745.132: 99.3636% ( 5) 00:09:09.252 17745.132 - 17845.957: 99.3920% ( 5) 00:09:09.252 17845.957 - 17946.782: 99.4148% ( 4) 00:09:09.252 17946.782 - 18047.606: 99.4375% ( 4) 00:09:09.252 18047.606 - 18148.431: 99.4545% ( 3) 00:09:09.252 18148.431 - 18249.255: 99.4716% ( 3) 00:09:09.252 18249.255 - 18350.080: 99.4886% ( 3) 00:09:09.252 18350.080 - 18450.905: 99.5170% ( 5) 00:09:09.252 18450.905 - 18551.729: 99.5341% ( 3) 00:09:09.252 18551.729 - 18652.554: 99.5625% ( 5) 00:09:09.252 18652.554 - 18753.378: 99.5795% ( 3) 00:09:09.252 18753.378 - 18854.203: 99.6023% ( 4) 00:09:09.252 18854.203 - 18955.028: 99.6250% ( 4) 00:09:09.252 18955.028 - 19055.852: 99.6364% ( 2) 00:09:09.252 23290.486 - 23391.311: 99.6420% ( 1) 00:09:09.252 23391.311 - 23492.135: 99.6477% ( 1) 00:09:09.252 23492.135 - 23592.960: 99.6534% ( 1) 00:09:09.252 23592.960 - 23693.785: 99.6648% ( 2) 00:09:09.252 23693.785 - 23794.609: 99.6705% ( 1) 00:09:09.252 23794.609 - 23895.434: 99.6989% ( 5) 00:09:09.252 24097.083 - 24197.908: 99.7386% ( 7) 00:09:09.252 24197.908 - 24298.732: 99.7727% ( 6) 00:09:09.252 24298.732 - 24399.557: 99.8011% ( 5) 00:09:09.252 24399.557 - 24500.382: 99.8466% ( 8) 00:09:09.252 24500.382 - 24601.206: 99.8920% ( 8) 00:09:09.252 24601.206 - 24702.031: 99.9261% ( 6) 00:09:09.252 24702.031 - 24802.855: 99.9489% ( 4) 00:09:09.252 24802.855 - 24903.680: 99.9773% ( 5) 00:09:09.252 24903.680 - 25004.505: 99.9943% ( 3) 00:09:09.252 25004.505 - 25105.329: 100.0000% ( 1) 00:09:09.252 00:09:09.252 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:09.252 ============================================================================== 00:09:09.252 Range in us Cumulative IO count 00:09:09.252 3856.542 - 3881.748: 0.0057% ( 1) 00:09:09.252 3881.748 - 3906.954: 0.0398% ( 6) 00:09:09.252 3906.954 - 3932.160: 0.0795% ( 7) 00:09:09.252 3932.160 - 3957.366: 0.1250% ( 8) 00:09:09.252 3957.366 - 3982.572: 0.1818% ( 10) 00:09:09.252 3982.572 - 4007.778: 0.2443% ( 11) 00:09:09.252 4007.778 - 4032.985: 0.2727% ( 5) 00:09:09.252 4032.985 - 4058.191: 0.2841% ( 2) 00:09:09.252 4058.191 - 4083.397: 0.2955% ( 2) 00:09:09.252 4083.397 - 4108.603: 0.3068% ( 2) 00:09:09.252 4108.603 - 4133.809: 0.3182% ( 2) 00:09:09.252 4133.809 - 4159.015: 0.3295% ( 2) 00:09:09.252 4159.015 - 4184.222: 0.3409% ( 2) 00:09:09.252 4184.222 - 4209.428: 0.3523% ( 2) 00:09:09.252 4209.428 - 4234.634: 0.3636% ( 2) 00:09:09.252 5520.148 - 5545.354: 0.3920% ( 5) 00:09:09.252 5545.354 - 5570.560: 0.4489% ( 10) 00:09:09.252 5570.560 - 5595.766: 0.5057% ( 10) 00:09:09.252 5595.766 - 5620.972: 0.5625% ( 10) 00:09:09.252 5620.972 - 5646.178: 0.6136% ( 9) 00:09:09.252 5646.178 - 5671.385: 0.6477% ( 6) 00:09:09.252 5671.385 - 5696.591: 0.6591% ( 2) 00:09:09.252 5696.591 - 5721.797: 0.6705% ( 2) 00:09:09.252 5721.797 - 5747.003: 0.6818% ( 2) 00:09:09.252 5747.003 - 5772.209: 0.6932% ( 2) 00:09:09.252 5772.209 - 5797.415: 0.6989% ( 1) 
00:09:09.252 [remainder of the nvme_perf cumulative latency table (buckets 5797.415 us through 24097.083 us, 0.7102% rising to 100.0000%) collapsed in this transcript]
00:09:09.253 04:55:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:09.253
00:09:09.253 real 0m2.461s
00:09:09.253 user 0m2.178s
00:09:09.253 sys 0m0.180s
00:09:09.253 ************************************
00:09:09.253 END TEST nvme_perf
00:09:09.253 ************************************
00:09:09.253 04:55:25 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.253 04:55:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:09.253 04:55:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:09.253 04:55:25 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:09.253 04:55:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.253 04:55:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:09.253 ************************************
00:09:09.253 START TEST nvme_hello_world
00:09:09.253 ************************************
00:09:09.253 04:55:25 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:09.253 Initializing NVMe Controllers
00:09:09.253 Attached to 0000:00:10.0
00:09:09.253 Namespace ID: 1 size: 6GB
00:09:09.253 Attached to 0000:00:11.0
00:09:09.253 Namespace ID: 1 size: 5GB
00:09:09.253 Attached to 0000:00:13.0
00:09:09.253 Namespace ID: 1 size: 1GB
00:09:09.253 Attached to 0000:00:12.0
00:09:09.253 Namespace ID: 1 size: 4GB
00:09:09.253 Namespace ID: 2 size: 4GB
00:09:09.253 Namespace ID: 3 size: 4GB
00:09:09.253 Initialization complete.
00:09:09.253 INFO: using host memory buffer for IO
00:09:09.253 Hello world!
00:09:09.253 INFO: using host memory buffer for IO
00:09:09.253 Hello world!
00:09:09.253 INFO: using host memory buffer for IO
00:09:09.253 Hello world!
00:09:09.253 INFO: using host memory buffer for IO
00:09:09.254 Hello world!
00:09:09.254 INFO: using host memory buffer for IO
00:09:09.254 Hello world!
00:09:09.254 INFO: using host memory buffer for IO
00:09:09.254 Hello world!
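The hello_world pass above attaches each controller and prints one greeting per namespace. A minimal way to reproduce it outside the harness, as a sketch: the setup.sh step and the HUGEMEM value are assumptions (standard SPDK preparation, not shown in this log), while the hello_world invocation is the one traced above.

  # prepare hugepages and bind the controllers for userspace I/O (assumed step)
  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # same invocation the harness runs; -i 0 selects shared-memory group 0
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0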
00:09:09.254
00:09:09.254 real 0m0.198s
00:09:09.254 user 0m0.073s
00:09:09.254 sys 0m0.086s
00:09:09.254 04:55:25 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.254 04:55:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:09.254 ************************************
00:09:09.254 END TEST nvme_hello_world
00:09:09.254 ************************************
00:09:09.254 04:55:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:09.254 04:55:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:09.254 04:55:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.254 04:55:25 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:09.254 ************************************
00:09:09.254 START TEST nvme_sgl
00:09:09.254 ************************************
00:09:09.254 04:55:25 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:09.512 [expected failures: build_io_request_{0,1,3,8,9,11} reported "Invalid IO length parameter" on 0000:00:10.0 and 0000:00:11.0; build_io_request_0 through build_io_request_11 reported it on 0000:00:13.0 and 0000:00:12.0]
00:09:09.512 NVMe Readv/Writev Request test
00:09:09.512 Attached to 0000:00:10.0
00:09:09.512 Attached to 0000:00:11.0
00:09:09.512 Attached to 0000:00:13.0
00:09:09.512 Attached to 0000:00:12.0
00:09:09.512 0000:00:10.0: build_io_request_2 test passed
00:09:09.512 0000:00:10.0: build_io_request_4 test passed
00:09:09.512 0000:00:10.0: build_io_request_5 test passed
00:09:09.512 0000:00:10.0: build_io_request_6 test passed
00:09:09.512 0000:00:10.0: build_io_request_7 test passed
00:09:09.512 0000:00:10.0: build_io_request_10 test passed
00:09:09.512 0000:00:11.0: build_io_request_2 test passed
00:09:09.512 0000:00:11.0: build_io_request_4 test passed
00:09:09.512 0000:00:11.0: build_io_request_5 test passed
00:09:09.512 0000:00:11.0: build_io_request_6 test passed
00:09:09.512 0000:00:11.0: build_io_request_7 test passed
00:09:09.512 0000:00:11.0: build_io_request_10 test passed
00:09:09.512 Cleaning up...
00:09:09.512
00:09:09.512 real 0m0.263s
00:09:09.512 user 0m0.118s
00:09:09.512 sys 0m0.100s
00:09:09.512 04:55:26 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.512 ************************************
00:09:09.512 END TEST nvme_sgl
00:09:09.512 ************************************
00:09:09.512 04:55:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:09.512 04:55:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:09.512 04:55:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:09.512 04:55:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.512 04:55:26 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:09.512 ************************************
00:09:09.512 START TEST nvme_e2edp
00:09:09.512 ************************************
00:09:09.512 04:55:26 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:09.770 NVMe Write/Read with End-to-End data protection test
00:09:09.770 Attached to 0000:00:10.0
00:09:09.770 Attached to 0000:00:11.0
00:09:09.770 Attached to 0000:00:13.0
00:09:09.770 Attached to 0000:00:12.0
00:09:09.770 Cleaning up...
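Every test in this log is driven through run_test, which prints the START/END banners and the real/user/sys timings seen above. A minimal equivalent wrapper, as a sketch of the observed behavior rather than the harness's actual implementation:

  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # emits the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

  # usage mirroring the log:
  run_test_sketch nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp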
00:09:09.770 00:09:09.771 real 0m0.190s 00:09:09.771 user 0m0.065s 00:09:09.771 sys 0m0.085s 00:09:09.771 04:55:26 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.771 ************************************ 00:09:09.771 END TEST nvme_e2edp 00:09:09.771 ************************************ 00:09:09.771 04:55:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:09.771 04:55:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:09.771 04:55:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.771 04:55:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.771 04:55:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.771 ************************************ 00:09:09.771 START TEST nvme_reserve 00:09:09.771 ************************************ 00:09:09.771 04:55:26 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:10.028 ===================================================== 00:09:10.028 NVMe Controller at PCI bus 0, device 16, function 0 00:09:10.028 ===================================================== 00:09:10.028 Reservations: Not Supported 00:09:10.028 ===================================================== 00:09:10.028 NVMe Controller at PCI bus 0, device 17, function 0 00:09:10.028 ===================================================== 00:09:10.028 Reservations: Not Supported 00:09:10.028 ===================================================== 00:09:10.028 NVMe Controller at PCI bus 0, device 19, function 0 00:09:10.028 ===================================================== 00:09:10.028 Reservations: Not Supported 00:09:10.028 ===================================================== 00:09:10.028 NVMe Controller at PCI bus 0, device 18, function 0 00:09:10.028 ===================================================== 00:09:10.028 Reservations: Not Supported 00:09:10.029 Reservation test passed 00:09:10.029 00:09:10.029 real 0m0.188s 00:09:10.029 user 0m0.077s 00:09:10.029 sys 0m0.073s 00:09:10.029 04:55:26 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.029 04:55:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 ************************************ 00:09:10.029 END TEST nvme_reserve 00:09:10.029 ************************************ 00:09:10.029 04:55:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:10.029 04:55:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.029 04:55:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.029 04:55:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.029 ************************************ 00:09:10.029 START TEST nvme_err_injection 00:09:10.029 ************************************ 00:09:10.029 04:55:26 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:10.287 NVMe Error Injection test 00:09:10.287 Attached to 0000:00:10.0 00:09:10.287 Attached to 0000:00:11.0 00:09:10.287 Attached to 0000:00:13.0 00:09:10.287 Attached to 0000:00:12.0 00:09:10.287 0000:00:13.0: get features failed as expected 00:09:10.287 0000:00:12.0: get features failed as expected 00:09:10.287 0000:00:10.0: get features failed as expected 00:09:10.287 0000:00:11.0: get features failed as expected 00:09:10.287 
0000:00:10.0: get features successfully as expected 00:09:10.287 0000:00:11.0: get features successfully as expected 00:09:10.287 0000:00:13.0: get features successfully as expected 00:09:10.287 0000:00:12.0: get features successfully as expected 00:09:10.287 0000:00:13.0: read failed as expected 00:09:10.287 0000:00:12.0: read failed as expected 00:09:10.287 0000:00:10.0: read failed as expected 00:09:10.287 0000:00:11.0: read failed as expected 00:09:10.287 0000:00:11.0: read successfully as expected 00:09:10.287 0000:00:13.0: read successfully as expected 00:09:10.287 0000:00:10.0: read successfully as expected 00:09:10.287 0000:00:12.0: read successfully as expected 00:09:10.287 Cleaning up... 00:09:10.287 00:09:10.287 real 0m0.199s 00:09:10.287 user 0m0.070s 00:09:10.287 sys 0m0.087s 00:09:10.287 04:55:26 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.287 ************************************ 00:09:10.287 END TEST nvme_err_injection 00:09:10.287 ************************************ 00:09:10.287 04:55:26 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:10.287 04:55:26 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:10.287 04:55:26 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:09:10.287 04:55:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.287 04:55:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.287 ************************************ 00:09:10.287 START TEST nvme_overhead 00:09:10.287 ************************************ 00:09:10.287 04:55:26 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:11.660 Initializing NVMe Controllers 00:09:11.660 Attached to 0000:00:10.0 00:09:11.660 Attached to 0000:00:11.0 00:09:11.660 Attached to 0000:00:13.0 00:09:11.660 Attached to 0000:00:12.0 00:09:11.660 Initialization complete. Launching workers. 
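The nvme_overhead run that follows measures per-I/O submit and complete CPU cost and prints two histograms. To repeat it by hand, a sketch reusing the flags from the run_test invocation above (-o 4096 for 4 KiB I/O, -t 1 for a one-second run, -H to print histograms; flag meanings inferred from the output, not stated in the log):

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0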
00:09:11.660 submit (in ns) avg, min, max = 12015.3, 10601.5, 105237.7
00:09:11.660 complete (in ns) avg, min, max = 7533.5, 7150.8, 207255.4
00:09:11.660
00:09:11.660 Submit histogram
00:09:11.660 ================
00:09:11.660 [per-bucket cumulative counts (10.585 us through 105.551 us, 0.0056% rising to 100.0000%) collapsed in this transcript]
00:09:11.661 Complete histogram
00:09:11.661 ==================
00:09:11.662 [per-bucket cumulative counts (7.138 us through 207.951 us, 0.1797% rising to 100.0000%) collapsed in this transcript]
00:09:11.662
00:09:11.662 real 0m1.192s
00:09:11.662 user 0m1.063s
00:09:11.662 sys 0m0.083s
00:09:11.662 04:55:28 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:11.662 ************************************
00:09:11.662 END TEST nvme_overhead
00:09:11.662 ************************************
00:09:11.662 04:55:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:09:11.662 04:55:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:11.662 04:55:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:11.662 04:55:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:11.662 04:55:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:11.662 ************************************
00:09:11.662 START TEST nvme_arbitration
00:09:11.662 ************************************
00:09:11.662 04:55:28 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:14.951 Initializing NVMe Controllers
00:09:14.951 Attached to 0000:00:10.0
00:09:14.951 Attached to 0000:00:11.0
00:09:14.951 Attached to 0000:00:13.0
00:09:14.951 Attached to 0000:00:12.0
00:09:14.951 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:14.951 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:14.951 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:14.951 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:14.951 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:14.951 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:14.951 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:14.951 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:14.951 Initialization complete. Launching workers.
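The arbitration example above pairs each QEMU NVMe controller with an lcore and exercises urgent-priority queues across cores 0-3 (-c 0xf). The fully expanded command line is echoed by the tool itself; rerunning it standalone is a sketch that assumes the same four controllers are still bound:

  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0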
00:09:14.951 Starting thread on core 1 with urgent priority queue 00:09:14.951 Starting thread on core 2 with urgent priority queue 00:09:14.951 Starting thread on core 3 with urgent priority queue 00:09:14.951 Starting thread on core 0 with urgent priority queue 00:09:14.951 QEMU NVMe Ctrl (12340 ) core 0: 6134.00 IO/s 16.30 secs/100000 ios 00:09:14.951 QEMU NVMe Ctrl (12342 ) core 0: 6120.00 IO/s 16.34 secs/100000 ios 00:09:14.951 QEMU NVMe Ctrl (12341 ) core 1: 6148.00 IO/s 16.27 secs/100000 ios 00:09:14.951 QEMU NVMe Ctrl (12342 ) core 1: 6144.00 IO/s 16.28 secs/100000 ios 00:09:14.951 QEMU NVMe Ctrl (12343 ) core 2: 5723.33 IO/s 17.47 secs/100000 ios 00:09:14.951 QEMU NVMe Ctrl (12342 ) core 3: 5714.67 IO/s 17.50 secs/100000 ios 00:09:14.951 ======================================================== 00:09:14.951 00:09:14.951 00:09:14.951 real 0m3.225s 00:09:14.951 user 0m9.025s 00:09:14.951 sys 0m0.107s 00:09:14.951 04:55:31 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.951 04:55:31 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:14.951 ************************************ 00:09:14.951 END TEST nvme_arbitration 00:09:14.951 ************************************ 00:09:14.951 04:55:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:14.951 04:55:31 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:14.951 04:55:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.951 04:55:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.951 ************************************ 00:09:14.951 START TEST nvme_single_aen 00:09:14.951 ************************************ 00:09:14.951 04:55:31 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:14.951 Asynchronous Event Request test 00:09:14.951 Attached to 0000:00:10.0 00:09:14.951 Attached to 0000:00:11.0 00:09:14.951 Attached to 0000:00:13.0 00:09:14.951 Attached to 0000:00:12.0 00:09:14.951 Reset controller to setup AER completions for this process 00:09:14.951 Registering asynchronous event callbacks... 
00:09:14.951 Getting orig temperature thresholds of all controllers 00:09:14.951 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.951 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.951 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.951 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:14.951 Setting all controllers temperature threshold low to trigger AER 00:09:14.951 Waiting for all controllers temperature threshold to be set lower 00:09:14.951 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.951 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:14.951 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.951 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:14.951 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.951 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:14.951 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:14.951 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:14.951 Waiting for all controllers to trigger AER and reset threshold 00:09:14.951 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.951 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.951 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.951 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:14.951 Cleaning up... 00:09:14.951 00:09:14.951 real 0m0.223s 00:09:14.951 user 0m0.080s 00:09:14.951 sys 0m0.093s 00:09:14.951 ************************************ 00:09:14.951 END TEST nvme_single_aen 00:09:14.951 ************************************ 00:09:14.951 04:55:31 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.951 04:55:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:15.213 04:55:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:15.213 04:55:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.213 04:55:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.213 04:55:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.213 ************************************ 00:09:15.213 START TEST nvme_doorbell_aers 00:09:15.213 ************************************ 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
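The xtrace above shows how nvme_doorbell_aers builds its device list: gen_nvme.sh emits a JSON config and jq pulls out each controller's PCI address. The same pipeline as a standalone one-liner, using only commands traced in the log:

  # collect the NVMe BDFs the test will iterate over
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # expected here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0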
00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:15.213 04:55:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:15.474 [2024-11-21 04:55:31.986649] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:25.464 Executing: test_write_invalid_db 00:09:25.464 Waiting for AER completion... 00:09:25.464 Failure: test_write_invalid_db 00:09:25.464 00:09:25.464 Executing: test_invalid_db_write_overflow_sq 00:09:25.464 Waiting for AER completion... 00:09:25.464 Failure: test_invalid_db_write_overflow_sq 00:09:25.464 00:09:25.464 Executing: test_invalid_db_write_overflow_cq 00:09:25.464 Waiting for AER completion... 00:09:25.464 Failure: test_invalid_db_write_overflow_cq 00:09:25.464 00:09:25.464 04:55:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:25.464 04:55:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:25.465 [2024-11-21 04:55:42.011163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:35.432 Executing: test_write_invalid_db 00:09:35.432 Waiting for AER completion... 00:09:35.432 Failure: test_write_invalid_db 00:09:35.432 00:09:35.432 Executing: test_invalid_db_write_overflow_sq 00:09:35.432 Waiting for AER completion... 00:09:35.432 Failure: test_invalid_db_write_overflow_sq 00:09:35.432 00:09:35.432 Executing: test_invalid_db_write_overflow_cq 00:09:35.432 Waiting for AER completion... 00:09:35.432 Failure: test_invalid_db_write_overflow_cq 00:09:35.433 00:09:35.433 04:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:35.433 04:55:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:35.433 [2024-11-21 04:55:52.052239] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:45.399 Executing: test_write_invalid_db 00:09:45.399 Waiting for AER completion... 00:09:45.399 Failure: test_write_invalid_db 00:09:45.399 00:09:45.399 Executing: test_invalid_db_write_overflow_sq 00:09:45.399 Waiting for AER completion... 00:09:45.399 Failure: test_invalid_db_write_overflow_sq 00:09:45.399 00:09:45.399 Executing: test_invalid_db_write_overflow_cq 00:09:45.399 Waiting for AER completion... 
00:09:45.399 Failure: test_invalid_db_write_overflow_cq 00:09:45.399 00:09:45.399 04:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:45.399 04:56:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:45.399 [2024-11-21 04:56:02.063834] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.368 Executing: test_write_invalid_db 00:09:55.368 Waiting for AER completion... 00:09:55.368 Failure: test_write_invalid_db 00:09:55.368 00:09:55.368 Executing: test_invalid_db_write_overflow_sq 00:09:55.368 Waiting for AER completion... 00:09:55.368 Failure: test_invalid_db_write_overflow_sq 00:09:55.368 00:09:55.368 Executing: test_invalid_db_write_overflow_cq 00:09:55.368 Waiting for AER completion... 00:09:55.368 Failure: test_invalid_db_write_overflow_cq 00:09:55.368 00:09:55.368 00:09:55.368 real 0m40.182s 00:09:55.368 user 0m34.226s 00:09:55.368 sys 0m5.597s 00:09:55.368 04:56:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.368 ************************************ 00:09:55.368 END TEST nvme_doorbell_aers 00:09:55.368 ************************************ 00:09:55.368 04:56:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:55.368 04:56:11 nvme -- nvme/nvme.sh@97 -- # uname 00:09:55.368 04:56:11 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:55.368 04:56:11 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:55.368 04:56:11 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:55.368 04:56:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.368 04:56:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.368 ************************************ 00:09:55.368 START TEST nvme_multi_aen 00:09:55.368 ************************************ 00:09:55.368 04:56:11 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:55.368 [2024-11-21 04:56:12.097394] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.097454] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.097465] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.098489] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.098508] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.098515] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.099520] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. 
Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.099542] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.099549] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.100493] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.100514] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.369 [2024-11-21 04:56:12.100521] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75196) is not found. Dropping the request. 00:09:55.627 Child process pid: 75722 00:09:55.627 [Child] Asynchronous Event Request test 00:09:55.627 [Child] Attached to 0000:00:10.0 00:09:55.627 [Child] Attached to 0000:00:11.0 00:09:55.627 [Child] Attached to 0000:00:13.0 00:09:55.627 [Child] Attached to 0000:00:12.0 00:09:55.627 [Child] Registering asynchronous event callbacks... 00:09:55.627 [Child] Getting orig temperature thresholds of all controllers 00:09:55.627 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.627 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.627 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.627 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.627 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:55.627 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.627 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.627 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.627 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.627 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.627 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.627 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.627 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.627 [Child] Cleaning up... 00:09:55.627 Asynchronous Event Request test 00:09:55.627 Attached to 0000:00:10.0 00:09:55.627 Attached to 0000:00:11.0 00:09:55.627 Attached to 0000:00:13.0 00:09:55.627 Attached to 0000:00:12.0 00:09:55.627 Reset controller to setup AER completions for this process 00:09:55.627 Registering asynchronous event callbacks... 
00:09:55.627 Getting orig temperature thresholds of all controllers 00:09:55.627 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.628 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.628 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.628 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:55.628 Setting all controllers temperature threshold low to trigger AER 00:09:55.628 Waiting for all controllers temperature threshold to be set lower 00:09:55.628 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.628 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:55.628 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.628 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:55.628 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.628 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:55.628 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:55.628 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:55.628 Waiting for all controllers to trigger AER and reset threshold 00:09:55.628 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.628 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.628 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.628 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:55.628 Cleaning up... 00:09:55.628 00:09:55.628 real 0m0.394s 00:09:55.628 user 0m0.135s 00:09:55.628 sys 0m0.161s 00:09:55.628 04:56:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.628 04:56:12 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:55.628 ************************************ 00:09:55.628 END TEST nvme_multi_aen 00:09:55.628 ************************************ 00:09:55.888 04:56:12 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:55.888 04:56:12 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:55.888 04:56:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.888 04:56:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.888 ************************************ 00:09:55.888 START TEST nvme_startup 00:09:55.888 ************************************ 00:09:55.888 04:56:12 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:55.888 Initializing NVMe Controllers 00:09:55.888 Attached to 0000:00:10.0 00:09:55.888 Attached to 0000:00:11.0 00:09:55.888 Attached to 0000:00:13.0 00:09:55.888 Attached to 0000:00:12.0 00:09:55.888 Initialization complete. 00:09:55.888 Time used:132039.234 (us). 
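nvme_startup attaches every controller and reports the initialization time printed above. A sketch of the manual invocation; the -t 1000000 argument is copied verbatim from the run_test line, and its exact interpretation by the startup binary is not shown in this log:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000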
00:09:55.888 00:09:55.888 real 0m0.186s 00:09:55.888 user 0m0.059s 00:09:55.888 sys 0m0.083s 00:09:55.888 04:56:12 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.888 04:56:12 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:55.888 ************************************ 00:09:55.888 END TEST nvme_startup 00:09:55.888 ************************************ 00:09:55.888 04:56:12 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:55.888 04:56:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.888 04:56:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.888 04:56:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.888 ************************************ 00:09:55.888 START TEST nvme_multi_secondary 00:09:55.888 ************************************ 00:09:55.888 04:56:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:55.888 04:56:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75773 00:09:55.888 04:56:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:55.888 04:56:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75774 00:09:55.888 04:56:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:55.888 04:56:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:59.175 Initializing NVMe Controllers 00:09:59.175 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:59.175 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:59.175 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:59.175 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:59.175 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:59.175 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:59.175 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:59.175 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:59.175 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:59.175 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:59.175 Initialization complete. Launching workers. 
00:09:59.175 ======================================================== 00:09:59.175 Latency(us) 00:09:59.175 Device Information : IOPS MiB/s Average min max 00:09:59.176 PCIE (0000:00:10.0) NSID 1 from core 1: 7594.60 29.67 2105.41 761.00 6035.57 00:09:59.176 PCIE (0000:00:11.0) NSID 1 from core 1: 7593.27 29.66 2106.71 781.73 5646.08 00:09:59.176 PCIE (0000:00:13.0) NSID 1 from core 1: 7592.60 29.66 2106.87 686.74 5408.34 00:09:59.176 PCIE (0000:00:12.0) NSID 1 from core 1: 7595.60 29.67 2106.04 696.57 5266.95 00:09:59.176 PCIE (0000:00:12.0) NSID 2 from core 1: 7595.60 29.67 2106.04 771.68 5651.74 00:09:59.176 PCIE (0000:00:12.0) NSID 3 from core 1: 7595.60 29.67 2106.03 775.12 5956.83 00:09:59.176 ======================================================== 00:09:59.176 Total : 45567.28 178.00 2106.18 686.74 6035.57 00:09:59.176 00:09:59.433 Initializing NVMe Controllers 00:09:59.433 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:59.433 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:59.433 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:59.433 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:59.433 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:59.433 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:59.433 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:59.433 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:59.433 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:59.433 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:59.433 Initialization complete. Launching workers. 00:09:59.433 ======================================================== 00:09:59.433 Latency(us) 00:09:59.433 Device Information : IOPS MiB/s Average min max 00:09:59.433 PCIE (0000:00:10.0) NSID 1 from core 2: 3118.32 12.18 5128.69 1097.50 13022.11 00:09:59.433 PCIE (0000:00:11.0) NSID 1 from core 2: 3118.32 12.18 5130.13 1065.21 13086.07 00:09:59.433 PCIE (0000:00:13.0) NSID 1 from core 2: 3118.32 12.18 5130.48 1105.52 12507.62 00:09:59.433 PCIE (0000:00:12.0) NSID 1 from core 2: 3118.32 12.18 5130.59 1204.15 13017.20 00:09:59.433 PCIE (0000:00:12.0) NSID 2 from core 2: 3118.32 12.18 5130.50 1212.14 13100.94 00:09:59.433 PCIE (0000:00:12.0) NSID 3 from core 2: 3118.32 12.18 5136.76 1267.34 13695.03 00:09:59.433 ======================================================== 00:09:59.433 Total : 18709.89 73.09 5131.19 1065.21 13695.03 00:09:59.433 00:09:59.433 04:56:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75773 00:10:01.334 Initializing NVMe Controllers 00:10:01.334 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:01.334 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:01.334 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:01.334 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:01.334 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:01.334 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:01.334 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:01.334 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:01.334 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:01.334 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:01.334 Initialization complete. Launching workers. 
00:10:01.334 ======================================================== 00:10:01.334 Latency(us) 00:10:01.334 Device Information : IOPS MiB/s Average min max 00:10:01.334 PCIE (0000:00:10.0) NSID 1 from core 0: 10646.99 41.59 1501.56 747.12 6133.89 00:10:01.334 PCIE (0000:00:11.0) NSID 1 from core 0: 10643.59 41.58 1502.87 757.51 5703.25 00:10:01.334 PCIE (0000:00:13.0) NSID 1 from core 0: 10646.99 41.59 1502.37 610.47 5620.23 00:10:01.334 PCIE (0000:00:12.0) NSID 1 from core 0: 10648.19 41.59 1502.17 546.09 5527.40 00:10:01.334 PCIE (0000:00:12.0) NSID 2 from core 0: 10648.19 41.59 1502.15 494.70 5437.81 00:10:01.334 PCIE (0000:00:12.0) NSID 3 from core 0: 10648.19 41.59 1502.13 410.51 6081.74 00:10:01.334 ======================================================== 00:10:01.334 Total : 63882.12 249.54 1502.21 410.51 6133.89 00:10:01.334 00:10:01.334 04:56:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75774 00:10:01.334 04:56:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75843 00:10:01.334 04:56:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:01.334 04:56:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75844 00:10:01.334 04:56:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:01.334 04:56:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:04.616 Initializing NVMe Controllers 00:10:04.616 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.616 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.616 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.616 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.616 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:04.616 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:04.616 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:04.616 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:04.616 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:04.616 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:04.616 Initialization complete. Launching workers. 
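nvme_multi_secondary starts three spdk_nvme_perf processes that share the same controllers through a common shared-memory ID (-i 0), each pinned to its own core mask. A sketch mirroring the three invocations traced above (the harness runs them concurrently and waits on their pids):

  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 &
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 &
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
  wait   # all three runs must finish before their latency tables print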
00:10:04.616 ======================================================== 00:10:04.616 Latency(us) 00:10:04.616 Device Information : IOPS MiB/s Average min max 00:10:04.616 PCIE (0000:00:10.0) NSID 1 from core 1: 7516.11 29.36 2127.43 774.38 6459.29 00:10:04.616 PCIE (0000:00:11.0) NSID 1 from core 1: 7516.11 29.36 2128.54 789.11 6560.93 00:10:04.616 PCIE (0000:00:13.0) NSID 1 from core 1: 7516.11 29.36 2128.52 789.91 6235.47 00:10:04.616 PCIE (0000:00:12.0) NSID 1 from core 1: 7516.11 29.36 2128.46 797.81 5645.22 00:10:04.616 PCIE (0000:00:12.0) NSID 2 from core 1: 7516.11 29.36 2128.47 787.47 6116.36 00:10:04.616 PCIE (0000:00:12.0) NSID 3 from core 1: 7516.11 29.36 2128.40 802.41 7045.55 00:10:04.616 ======================================================== 00:10:04.616 Total : 45096.64 176.16 2128.30 774.38 7045.55 00:10:04.616 00:10:04.616 Initializing NVMe Controllers 00:10:04.616 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.616 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.616 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.616 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.616 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:04.616 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:04.616 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:04.616 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:04.616 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:04.616 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:04.616 Initialization complete. Launching workers. 00:10:04.616 ======================================================== 00:10:04.616 Latency(us) 00:10:04.616 Device Information : IOPS MiB/s Average min max 00:10:04.616 PCIE (0000:00:10.0) NSID 1 from core 0: 7409.12 28.94 2158.08 760.77 6007.37 00:10:04.616 PCIE (0000:00:11.0) NSID 1 from core 0: 7409.12 28.94 2159.07 783.81 5586.56 00:10:04.616 PCIE (0000:00:13.0) NSID 1 from core 0: 7409.12 28.94 2159.03 793.33 5622.08 00:10:04.616 PCIE (0000:00:12.0) NSID 1 from core 0: 7409.12 28.94 2159.05 788.36 5389.56 00:10:04.616 PCIE (0000:00:12.0) NSID 2 from core 0: 7409.12 28.94 2159.00 782.89 5784.04 00:10:04.616 PCIE (0000:00:12.0) NSID 3 from core 0: 7409.12 28.94 2158.97 787.67 6066.81 00:10:04.616 ======================================================== 00:10:04.616 Total : 44454.73 173.65 2158.87 760.77 6066.81 00:10:04.616 00:10:06.557 Initializing NVMe Controllers 00:10:06.557 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.557 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:06.557 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:06.557 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:06.557 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:06.557 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:06.557 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:06.557 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:06.557 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:06.557 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:06.557 Initialization complete. Launching workers. 
00:10:06.557 ======================================================== 00:10:06.557 Latency(us) 00:10:06.557 Device Information : IOPS MiB/s Average min max 00:10:06.557 PCIE (0000:00:10.0) NSID 1 from core 2: 4495.05 17.56 3557.47 782.98 12830.21 00:10:06.557 PCIE (0000:00:11.0) NSID 1 from core 2: 4495.05 17.56 3559.16 787.24 12747.79 00:10:06.557 PCIE (0000:00:13.0) NSID 1 from core 2: 4495.05 17.56 3558.95 808.86 13099.74 00:10:06.557 PCIE (0000:00:12.0) NSID 1 from core 2: 4495.05 17.56 3559.10 693.84 12136.00 00:10:06.557 PCIE (0000:00:12.0) NSID 2 from core 2: 4495.05 17.56 3559.06 599.64 12715.75 00:10:06.557 PCIE (0000:00:12.0) NSID 3 from core 2: 4495.05 17.56 3558.84 492.31 12934.69 00:10:06.557 ======================================================== 00:10:06.557 Total : 26970.29 105.35 3558.76 492.31 13099.74 00:10:06.557 00:10:06.557 04:56:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75843 00:10:06.557 04:56:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75844 00:10:06.557 00:10:06.557 real 0m10.530s 00:10:06.557 user 0m18.342s 00:10:06.557 sys 0m0.618s 00:10:06.557 04:56:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.557 ************************************ 00:10:06.557 END TEST nvme_multi_secondary 00:10:06.557 ************************************ 00:10:06.557 04:56:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:06.557 04:56:23 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:06.557 04:56:23 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74798 ]] 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1094 -- # kill 74798 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1095 -- # wait 74798 00:10:06.557 [2024-11-21 04:56:23.175125] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175204] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175825] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175837] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.175850] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.176470] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 
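Note: the three spdk_nvme_perf runs above exercise SPDK multi-process mode: the processes attach to the same controllers by sharing one shared-memory group id (-i 0), while the core masks (-c 0x1/0x2/0x4) pin each reactor to its own lcore. A condensed sketch of the launch sequence in nvme/nvme.sh, with binary path and flags copied from the trace (the PERF/pid variable names are local to this sketch, not the script's):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # Background readers: queued 4 KiB reads at qd 16, joined to shm group 0
  # so they see the controllers the first process probed.
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid0=$!
  $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 & pid1=$!
  # Foreground run on lcore 0; its table prints as "from core 0" above.
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
  wait "$pid0"        # 3 s run on lcore 1
  wait "$pid1"        # 5 s run on lcore 2 reports last, as in the final table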
00:10:06.557 [2024-11-21 04:56:23.176510] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.176522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.176537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.177131] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.177170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.177182] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 [2024-11-21 04:56:23.177195] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75721) is not found. Dropping the request. 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:10:06.557 04:56:23 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.557 04:56:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.557 ************************************ 00:10:06.557 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:06.557 ************************************ 00:10:06.557 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:06.817 * Looking for test storage... 
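Note: the burst of identical "owning process (pid 75721) is not found. Dropping the request." errors above is expected teardown noise, not a failure: once the process that owned the queued admin requests has exited, nvme_pcie drops them rather than completing them. The kill_stub helper being traced reduces to roughly this simplified form (function shape and paths follow the trace; pid 74798 is this run's stub):

  kill_stub() {
    local stubpid=74798               # recorded when the stub was launched
    [[ -e /proc/$stubpid ]] || return 0
    kill "$stubpid"
    wait "$stubpid"                   # reap the child; the errors above stream here
    rm -f /var/run/spdk_stub0         # remove the stub's sentinel file
  }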
00:10:06.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.817 --rc genhtml_branch_coverage=1 00:10:06.817 --rc genhtml_function_coverage=1 00:10:06.817 --rc genhtml_legend=1 00:10:06.817 --rc geninfo_all_blocks=1 00:10:06.817 --rc geninfo_unexecuted_blocks=1 00:10:06.817 00:10:06.817 ' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.817 --rc genhtml_branch_coverage=1 00:10:06.817 --rc genhtml_function_coverage=1 00:10:06.817 --rc genhtml_legend=1 00:10:06.817 --rc geninfo_all_blocks=1 00:10:06.817 --rc geninfo_unexecuted_blocks=1 00:10:06.817 00:10:06.817 ' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.817 --rc genhtml_branch_coverage=1 00:10:06.817 --rc genhtml_function_coverage=1 00:10:06.817 --rc genhtml_legend=1 00:10:06.817 --rc geninfo_all_blocks=1 00:10:06.817 --rc geninfo_unexecuted_blocks=1 00:10:06.817 00:10:06.817 ' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.817 --rc genhtml_branch_coverage=1 00:10:06.817 --rc genhtml_function_coverage=1 00:10:06.817 --rc genhtml_legend=1 00:10:06.817 --rc geninfo_all_blocks=1 00:10:06.817 --rc geninfo_unexecuted_blocks=1 00:10:06.817 00:10:06.817 ' 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:06.817 
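Note: the scripts/common.sh trace above is just a component-wise version compare, used to decide whether the installed lcov is older than 2 and therefore needs the long branch/function-coverage option strings that follow. The same logic as a stand-alone sketch (condensed; the real script also validates each component through decimal()):

  lt() {   # usage: lt 1.15 2   ->  returns 0 iff $1 < $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1    # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov < 2: enable legacy coverage flags"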
04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:06.817 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76006 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76006 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 76006 ']' 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
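Note: get_first_nvme_bdf, traced above, never walks sysfs itself; it asks gen_nvme.sh for the attach configuration, pulls the PCI addresses out of the JSON with jq, and keeps the first one. Reduced to its essentials (rootdir as in the trace):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1     # bail out when no controllers exist
  printf '%s\n' "${bdfs[@]}"          # 0000:00:10.0 ... 0000:00:13.0 here
  bdf=${bdfs[0]}                      # the test targets 0000:00:10.0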
00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.818 04:56:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:06.818 [2024-11-21 04:56:23.545820] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:10:06.818 [2024-11-21 04:56:23.545942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:10:07.078 [2024-11-21 04:56:23.715670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.078 [2024-11-21 04:56:23.743793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.078 [2024-11-21 04:56:23.744073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.078 [2024-11-21 04:56:23.744312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.078 [2024-11-21 04:56:23.744449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.648 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.648 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:10:07.648 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:07.648 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.648 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:07.910 nvme0n1 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_oDjTY.txt 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:07.910 true 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732164984 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76029 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:07.910 04:56:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:09.823 [2024-11-21 04:56:26.469871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:09.823 [2024-11-21 04:56:26.470335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:09.823 [2024-11-21 04:56:26.470368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:09.823 [2024-11-21 04:56:26.470391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.823 [2024-11-21 04:56:26.471824] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.823 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76029 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76029 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76029 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_oDjTY.txt 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_oDjTY.txt 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76006 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 76006 ']' 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 76006 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:09.823 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.082 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76006 00:10:10.082 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.082 killing process with pid 76006 00:10:10.082 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.082 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76006' 00:10:10.082 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 76006 00:10:10.082 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 76006 00:10:10.344 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:10.344 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:10.344 00:10:10.344 real 0m3.613s 00:10:10.344 user 0m12.799s 00:10:10.344 sys 0m0.526s 00:10:10.344 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:10.344 04:56:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:10.344 ************************************ 00:10:10.344 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:10.344 ************************************ 00:10:10.344 04:56:26 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:10.344 04:56:26 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:10.344 04:56:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.344 04:56:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.344 04:56:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.344 ************************************ 00:10:10.344 START TEST nvme_fio 00:10:10.344 ************************************ 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:10.344 04:56:26 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.344 04:56:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:10.605 04:56:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:10.605 04:56:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.867 04:56:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.867 04:56:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:10.867 04:56:27 nvme.nvme_fio -- 
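Note: the bdev_nvme_reset_stuck_adm_cmd run that just passed boils down to one RPC sequence: attach the controller, arm a one-shot injection that holds an admin command without submitting it, fire that command, then reset the controller and verify that the reset manually completed the stuck request with the injected status. Reduced from the trace (RPC and tmp_file are shorthand for this sketch; the -c payload is the base64 GET FEATURES / Number of Queues command shown above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # Hold admin opcode 0x0a (GET FEATURES) for up to 15 s, then complete it
  # once with sct=0/sc=1 instead of ever sending it to the device.
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
      > "$tmp_file" &                       # gets stuck behind the injection
  sleep 2
  $RPC bdev_nvme_reset_controller nvme0     # completes the pending request
  wait                                      # collect the send_cmd reply

The .cpl blob in the reply (AAAAAAAAAAAAAAAAAAACAA==) is then unpacked with base64 -d | hexdump; the status word decodes to 0x0002, from which the helper extracts SC = (status >> 1) & 255 = 0x1 and SCT = (status >> 9) & 3 = 0x0, matching the injected values, while the elapsed 2 s stays under the 5 s test budget.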
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.867 04:56:27 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:11.128 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:11.128 fio-3.35 00:10:11.128 Starting 1 thread 00:10:14.428 00:10:14.428 test: (groupid=0, jobs=1): err= 0: pid=76152: Thu Nov 21 04:56:30 2024 00:10:14.428 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(96.7MiB/2035msec) 00:10:14.428 slat (usec): min=3, max=110, avg= 6.17, stdev= 3.11 00:10:14.428 clat (usec): min=818, max=41014, avg=3752.82, stdev=2003.90 00:10:14.428 lat (usec): min=822, max=41019, avg=3758.99, stdev=2004.44 00:10:14.428 clat percentiles (usec): 00:10:14.428 | 1.00th=[ 1254], 5.00th=[ 1893], 10.00th=[ 2343], 20.00th=[ 2606], 00:10:14.428 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 3097], 60.00th=[ 3392], 00:10:14.428 | 70.00th=[ 4015], 80.00th=[ 4883], 90.00th=[ 6128], 95.00th=[ 7177], 00:10:14.428 | 99.00th=[ 9896], 99.50th=[10814], 99.90th=[16057], 99.95th=[37487], 00:10:14.428 | 99.99th=[41157] 00:10:14.428 bw ( KiB/s): min=27712, max=71096, per=100.00%, avg=49442.00, stdev=23540.63, samples=4 00:10:14.428 iops : min= 6928, max=17774, avg=12360.50, stdev=5885.16, samples=4 00:10:14.428 write: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(96.5MiB/2035msec); 0 zone resets 00:10:14.428 slat (nsec): min=4073, max=70002, avg=6470.57, stdev=2924.11 00:10:14.428 clat (usec): min=906, max=82429, avg=6759.58, stdev=9443.12 00:10:14.428 lat (usec): min=910, max=82435, avg=6766.05, stdev=9443.22 00:10:14.428 clat percentiles (usec): 00:10:14.428 | 1.00th=[ 1418], 5.00th=[ 2212], 10.00th=[ 2507], 20.00th=[ 2704], 00:10:14.428 | 30.00th=[ 2835], 40.00th=[ 2999], 50.00th=[ 3228], 60.00th=[ 3752], 00:10:14.428 | 70.00th=[ 4686], 80.00th=[ 6063], 90.00th=[18220], 95.00th=[32375], 00:10:14.428 | 99.00th=[45351], 99.50th=[47973], 99.90th=[68682], 99.95th=[76022], 00:10:14.428 | 99.99th=[80217] 00:10:14.428 bw ( KiB/s): min=27336, max=71160, per=100.00%, avg=49192.00, stdev=23761.89, samples=4 00:10:14.428 iops : min= 6834, max=17790, avg=12298.00, stdev=5940.47, samples=4 00:10:14.428 lat (usec) : 1000=0.08% 00:10:14.428 lat (msec) : 2=4.77%, 4=61.54%, 10=27.26%, 20=1.61%, 50=4.57% 00:10:14.428 lat (msec) : 100=0.17% 00:10:14.428 cpu : usr=99.16%, 
sys=0.10%, ctx=6, majf=0, minf=627 00:10:14.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:14.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.428 issued rwts: total=24744,24700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.428 00:10:14.428 Run status group 0 (all jobs): 00:10:14.428 READ: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=96.7MiB (101MB), run=2035-2035msec 00:10:14.428 WRITE: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=96.5MiB (101MB), run=2035-2035msec 00:10:14.428 ----------------------------------------------------- 00:10:14.428 Suppressions used: 00:10:14.428 count bytes template 00:10:14.428 1 32 /usr/src/fio/parse.c 00:10:14.428 1 8 libtcmalloc_minimal.so 00:10:14.428 ----------------------------------------------------- 00:10:14.428 00:10:14.428 04:56:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:14.428 04:56:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:14.428 04:56:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:14.428 04:56:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:14.428 04:56:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:14.428 04:56:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:14.689 04:56:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:14.689 04:56:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:14.689 04:56:31 
nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:14.689 04:56:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:14.950 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:14.950 fio-3.35 00:10:14.950 Starting 1 thread 00:10:21.575 00:10:21.575 test: (groupid=0, jobs=1): err= 0: pid=76213: Thu Nov 21 04:56:37 2024 00:10:21.575 read: IOPS=20.1k, BW=78.4MiB/s (82.2MB/s)(157MiB/2001msec) 00:10:21.575 slat (nsec): min=3935, max=89905, avg=5827.99, stdev=2510.50 00:10:21.575 clat (usec): min=438, max=10678, avg=3177.39, stdev=929.29 00:10:21.575 lat (usec): min=448, max=10745, avg=3183.21, stdev=930.47 00:10:21.575 clat percentiles (usec): 00:10:21.575 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:21.575 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2966], 00:10:21.575 | 70.00th=[ 3130], 80.00th=[ 3523], 90.00th=[ 4490], 95.00th=[ 5276], 00:10:21.575 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 9765], 00:10:21.575 | 99.99th=[10552] 00:10:21.575 bw ( KiB/s): min=72672, max=85880, per=100.00%, avg=81298.67, stdev=7475.72, samples=3 00:10:21.575 iops : min=18168, max=21470, avg=20324.67, stdev=1868.93, samples=3 00:10:21.575 write: IOPS=20.0k, BW=78.2MiB/s (82.0MB/s)(157MiB/2001msec); 0 zone resets 00:10:21.575 slat (nsec): min=4011, max=73685, avg=6072.13, stdev=2515.70 00:10:21.575 clat (usec): min=473, max=10606, avg=3185.77, stdev=926.43 00:10:21.575 lat (usec): min=482, max=10625, avg=3191.85, stdev=927.64 00:10:21.575 clat percentiles (usec): 00:10:21.575 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:10:21.575 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2966], 00:10:21.575 | 70.00th=[ 3130], 80.00th=[ 3523], 90.00th=[ 4490], 95.00th=[ 5276], 00:10:21.575 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 9765], 00:10:21.575 | 99.99th=[10421] 00:10:21.575 bw ( KiB/s): min=72784, max=86016, per=100.00%, avg=81434.67, stdev=7496.07, samples=3 00:10:21.575 iops : min=18196, max=21504, avg=20358.67, stdev=1874.02, samples=3 00:10:21.575 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:21.575 lat (msec) : 2=0.48%, 4=85.53%, 10=13.92%, 20=0.04% 00:10:21.575 cpu : usr=99.00%, sys=0.15%, ctx=5, majf=0, minf=627 00:10:21.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:21.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.575 issued rwts: total=40160,40067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.575 00:10:21.575 Run status group 0 (all jobs): 00:10:21.575 READ: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=157MiB (164MB), run=2001-2001msec 00:10:21.575 WRITE: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=157MiB (164MB), run=2001-2001msec 00:10:21.575 ----------------------------------------------------- 00:10:21.575 Suppressions used: 00:10:21.575 count bytes template 00:10:21.575 1 32 /usr/src/fio/parse.c 00:10:21.575 1 8 libtcmalloc_minimal.so 00:10:21.575 ----------------------------------------------------- 
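Note: each nvme_fio pass above drives stock fio straight at a PCIe controller through SPDK's external ioengine, so no kernel block device is involved: the target is encoded in --filename as transport plus address, with dots in place of colons because fio splits filenames on ':'. Stripped of the sanitizer-detection plumbing in fio_plugin() (the ldd/grep/awk lines in the trace), a single pass looks like:

  FIO=/usr/src/fio/fio
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
  # On ASAN builds, libasan must precede the plugin in LD_PRELOAD, which is
  # exactly what the ldd lookup before each run resolves.
  LD_PRELOAD="/usr/lib64/libasan.so.8 $PLUGIN" \
      "$FIO" "$JOB" '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The spdk_nvme_identify probe before each pass checks the namespace for "Extended Data LBA" formats and picks the fio block size accordingly (plain 4096-byte blocks for every controller in this run).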
00:10:21.575 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:21.575 04:56:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:21.575 04:56:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:21.575 04:56:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:21.575 04:56:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:21.575 04:56:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:21.575 04:56:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:21.575 04:56:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:21.575 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:21.575 fio-3.35 00:10:21.575 Starting 1 thread 00:10:28.138 00:10:28.138 test: (groupid=0, jobs=1): err= 0: pid=76268: Thu Nov 21 04:56:44 2024 00:10:28.138 read: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(169MiB/2001msec) 00:10:28.138 slat (nsec): min=3430, max=50410, avg=5320.32, stdev=2572.76 00:10:28.138 clat (usec): min=318, max=10168, avg=2960.54, stdev=889.30 00:10:28.138 lat (usec): min=323, max=10215, avg=2965.86, stdev=890.90 00:10:28.138 clat percentiles (usec): 00:10:28.138 | 1.00th=[ 1958], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2573], 
00:10:28.138 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:28.138 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 4080], 95.00th=[ 5276], 00:10:28.138 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 8094], 99.95th=[ 8455], 00:10:28.138 | 99.99th=[ 9896] 00:10:28.138 bw ( KiB/s): min=84472, max=89416, per=100.00%, avg=86514.67, stdev=2581.43, samples=3 00:10:28.138 iops : min=21118, max=22354, avg=21628.67, stdev=645.36, samples=3 00:10:28.138 write: IOPS=21.4k, BW=83.7MiB/s (87.8MB/s)(168MiB/2001msec); 0 zone resets 00:10:28.138 slat (nsec): min=3540, max=85229, avg=5734.73, stdev=2579.18 00:10:28.138 clat (usec): min=220, max=9914, avg=2966.85, stdev=898.25 00:10:28.138 lat (usec): min=225, max=9933, avg=2972.58, stdev=899.84 00:10:28.138 clat percentiles (usec): 00:10:28.138 | 1.00th=[ 1958], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:10:28.138 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:10:28.138 | 70.00th=[ 2737], 80.00th=[ 2933], 90.00th=[ 4080], 95.00th=[ 5276], 00:10:28.138 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 8225], 99.95th=[ 8717], 00:10:28.138 | 99.99th=[ 9765] 00:10:28.138 bw ( KiB/s): min=84544, max=90160, per=100.00%, avg=86725.33, stdev=3010.48, samples=3 00:10:28.138 iops : min=21136, max=22540, avg=21681.33, stdev=752.62, samples=3 00:10:28.138 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:28.138 lat (msec) : 2=1.11%, 4=88.55%, 10=10.31%, 20=0.01% 00:10:28.138 cpu : usr=99.20%, sys=0.00%, ctx=3, majf=0, minf=627 00:10:28.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:28.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.138 issued rwts: total=43223,42899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.138 00:10:28.138 Run status group 0 (all jobs): 00:10:28.138 READ: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=169MiB (177MB), run=2001-2001msec 00:10:28.138 WRITE: bw=83.7MiB/s (87.8MB/s), 83.7MiB/s-83.7MiB/s (87.8MB/s-87.8MB/s), io=168MiB (176MB), run=2001-2001msec 00:10:28.138 ----------------------------------------------------- 00:10:28.138 Suppressions used: 00:10:28.138 count bytes template 00:10:28.138 1 32 /usr/src/fio/parse.c 00:10:28.138 1 8 libtcmalloc_minimal.so 00:10:28.138 ----------------------------------------------------- 00:10:28.138 00:10:28.138 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:28.138 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:28.138 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:28.138 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:28.397 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:28.397 04:56:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:28.656 04:56:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:28.656 04:56:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:28.656 04:56:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:28.656 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:28.656 fio-3.35 00:10:28.656 Starting 1 thread 00:10:35.234 00:10:35.234 test: (groupid=0, jobs=1): err= 0: pid=76329: Thu Nov 21 04:56:50 2024 00:10:35.234 read: IOPS=21.2k, BW=82.7MiB/s (86.8MB/s)(166MiB/2001msec) 00:10:35.234 slat (nsec): min=3893, max=61358, avg=5651.26, stdev=2472.60 00:10:35.234 clat (usec): min=412, max=8521, avg=3012.42, stdev=995.04 00:10:35.234 lat (usec): min=417, max=8533, avg=3018.07, stdev=996.26 00:10:35.234 clat percentiles (usec): 00:10:35.234 | 1.00th=[ 1500], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2409], 00:10:35.234 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:10:35.234 | 70.00th=[ 2966], 80.00th=[ 3425], 90.00th=[ 4490], 95.00th=[ 5276], 00:10:35.234 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7898], 99.95th=[ 8094], 00:10:35.234 | 99.99th=[ 8356] 00:10:35.234 bw ( KiB/s): min=78152, max=94616, per=100.00%, avg=85341.33, stdev=8427.77, samples=3 00:10:35.234 iops : min=19538, max=23654, avg=21335.33, stdev=2106.94, samples=3 00:10:35.234 write: IOPS=21.0k, BW=82.2MiB/s (86.2MB/s)(164MiB/2001msec); 0 zone resets 00:10:35.234 slat (usec): min=3, max=247, avg= 5.94, stdev= 2.85 00:10:35.234 clat (usec): min=354, max=8529, avg=3031.96, stdev=1004.00 00:10:35.234 lat (usec): min=360, max=8542, avg=3037.90, stdev=1005.24 00:10:35.234 clat percentiles (usec): 00:10:35.234 | 1.00th=[ 1500], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2409], 00:10:35.234 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:10:35.234 | 70.00th=[ 2999], 80.00th=[ 3490], 
90.00th=[ 4555], 95.00th=[ 5342], 00:10:35.234 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 8094], 00:10:35.234 | 99.99th=[ 8291] 00:10:35.234 bw ( KiB/s): min=77944, max=94520, per=100.00%, avg=85509.33, stdev=8381.99, samples=3 00:10:35.234 iops : min=19486, max=23630, avg=21377.33, stdev=2095.50, samples=3 00:10:35.234 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.08% 00:10:35.234 lat (msec) : 2=2.68%, 4=82.80%, 10=14.43% 00:10:35.234 cpu : usr=98.90%, sys=0.25%, ctx=15, majf=0, minf=625 00:10:35.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.234 issued rwts: total=42384,42099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.234 00:10:35.234 Run status group 0 (all jobs): 00:10:35.234 READ: bw=82.7MiB/s (86.8MB/s), 82.7MiB/s-82.7MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:35.234 WRITE: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=164MiB (172MB), run=2001-2001msec 00:10:35.234 ----------------------------------------------------- 00:10:35.234 Suppressions used: 00:10:35.234 count bytes template 00:10:35.234 1 32 /usr/src/fio/parse.c 00:10:35.234 1 8 libtcmalloc_minimal.so 00:10:35.234 ----------------------------------------------------- 00:10:35.234 00:10:35.234 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:35.234 04:56:50 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:35.234 00:10:35.234 real 0m23.973s 00:10:35.234 user 0m15.382s 00:10:35.234 sys 0m14.755s 00:10:35.234 ************************************ 00:10:35.234 END TEST nvme_fio 00:10:35.234 ************************************ 00:10:35.234 04:56:50 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.234 04:56:50 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:35.234 00:10:35.234 real 1m32.589s 00:10:35.234 user 3m32.258s 00:10:35.234 sys 0m25.343s 00:10:35.234 04:56:50 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.234 04:56:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:35.234 ************************************ 00:10:35.234 END TEST nvme 00:10:35.234 ************************************ 00:10:35.234 04:56:50 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:35.234 04:56:50 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:35.234 04:56:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:35.234 04:56:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.234 04:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:35.234 ************************************ 00:10:35.234 START TEST nvme_scc 00:10:35.234 ************************************ 00:10:35.234 04:56:51 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:35.234 * Looking for test storage... 
00:10:35.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:35.234 04:56:51 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:35.234 04:56:51 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:35.234 04:56:51 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:35.234 04:56:51 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:35.234 04:56:51 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.234 04:56:51 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.234 04:56:51 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.234 04:56:51 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:35.235 04:56:51 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.235 04:56:51 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:35.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.235 --rc genhtml_branch_coverage=1 00:10:35.235 --rc genhtml_function_coverage=1 00:10:35.235 --rc genhtml_legend=1 00:10:35.235 --rc geninfo_all_blocks=1 00:10:35.235 --rc geninfo_unexecuted_blocks=1 00:10:35.235 00:10:35.235 ' 00:10:35.235 04:56:51 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:35.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.235 --rc genhtml_branch_coverage=1 00:10:35.235 --rc genhtml_function_coverage=1 00:10:35.235 --rc genhtml_legend=1 00:10:35.235 --rc geninfo_all_blocks=1 00:10:35.235 --rc geninfo_unexecuted_blocks=1 00:10:35.235 00:10:35.235 ' 00:10:35.235 04:56:51 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:35.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.235 --rc genhtml_branch_coverage=1 00:10:35.235 --rc genhtml_function_coverage=1 00:10:35.235 --rc genhtml_legend=1 00:10:35.235 --rc geninfo_all_blocks=1 00:10:35.235 --rc geninfo_unexecuted_blocks=1 00:10:35.235 00:10:35.235 ' 00:10:35.235 04:56:51 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:35.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.235 --rc genhtml_branch_coverage=1 00:10:35.235 --rc genhtml_function_coverage=1 00:10:35.235 --rc genhtml_legend=1 00:10:35.235 --rc geninfo_all_blocks=1 00:10:35.235 --rc geninfo_unexecuted_blocks=1 00:10:35.235 00:10:35.235 ' 00:10:35.235 04:56:51 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.235 04:56:51 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.235 04:56:51 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.235 04:56:51 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.235 04:56:51 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.235 04:56:51 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:35.235 04:56:51 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:35.235 04:56:51 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:35.235 04:56:51 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.235 04:56:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:35.235 04:56:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:35.235 04:56:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:35.235 04:56:51 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:35.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.235 Waiting for block devices as requested 00:10:35.235 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.235 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.235 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.235 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.535 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:40.535 04:56:56 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:40.535 04:56:56 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:40.535 04:56:56 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:40.535 04:56:56 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.535 04:56:56 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:56 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
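[Annotation] Everything that follows in this section is nvme_get repeating one pattern hundreds of times: scan_nvme_ctrls walks /sys/class/nvme/nvme*, runs nvme-cli's id-ctrl (and later id-ns) against each device, reads the output line by line with IFS=:, and evals each non-empty "reg : val" pair into a per-device associative array (nvme0, then ng0n1 and nvme0n1). A minimal sketch of that capture loop, with the whitespace handling spelled out (simplified from the traced functions.sh; the real script uses a nameref and the full nvme-cli path):

    # Minimal sketch of the nvme_get capture loop traced below.
    # Assumes nvme-cli prints "field : value" lines, as id-ctrl does.
    declare -A ctrl                          # stands in for the nvme0 array

    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # "ps   0" -> "ps0", as in the trace
        val=${val#"${val%%[![:space:]]*}"}   # trim leading whitespace only
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${ctrl[vid]} mn=${ctrl[mn]} mdts=${ctrl[mdts]}"

Note that read with two variables keeps everything after the first colon in val, which is why composite values such as the power-state lines (mp:25.00W ...) survive the IFS=: split intact.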
00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.535 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:40.535 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
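[Annotation] Several of the values captured above are bitmasks from the Identify Controller structure rather than scalars: oacs=0x12a encodes which optional admin commands the controller accepts, and frmw=0x3 and lpa=0x7 are packed the same way. Tests built on these arrays probe individual bits rather than comparing whole values. A small illustration, with bit positions per the NVMe base spec (has_bit is a made-up helper, not part of functions.sh):

    # has_bit VALUE MASK: true when every bit in MASK is set in VALUE.
    # Hypothetical helper for reading bitmask fields like oacs above.
    has_bit() { (( ($1 & $2) == $2 )); }

    oacs=0x12a                    # as captured into nvme0[oacs] above
    has_bit "$oacs" 0x02  && echo "Format NVM supported"             # bit 1
    has_bit "$oacs" 0x08  && echo "Namespace Management supported"   # bit 3
    has_bit "$oacs" 0x100 && echo "Doorbell Buffer Config supported" # bit 8

For the QEMU controller traced here, all three bits are set in 0x12a, so all three lines would print.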
00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:40.536 04:56:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:40.536 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.537 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:40.538 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:40.538 
04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
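[Annotation] With the controller done, the same loop now captures id-ns for each namespace node: the extglob a few entries up, @("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, matches both the generic character node ng0n1 and the block node nvme0n1. The size fields are counted in logical blocks: nsze=0x140000 with flbas=0x4 selecting LBA format 4 (whose lbaf4 entry, captured a few entries below, reads "ms:0 lbads:12 rp:0 (in use)") works out as follows (a worked sketch using only values from this trace):

    # Worked example: namespace capacity from the id-ns fields in this trace.
    nsze=0x140000    # namespace size, in logical blocks
    flbas=0x4        # bits 3:0 select the active LBA format index
    lbads=12         # log2(block size), from lbaf4 "ms:0 lbads:12 rp:0 (in use)"

    fmt=$(( flbas & 0xf ))                 # -> 4, i.e. lbaf4 is in use
    block=$(( 1 << lbads ))                # -> 4096-byte blocks
    bytes=$(( nsze * block ))              # -> 5368709120
    printf 'lbaf%d: %d blocks x %d B = %d GiB\n' \
        "$fmt" $(( nsze )) "$block" $(( bytes >> 30 ))

This prints "lbaf4: 1310720 blocks x 4096 B = 5 GiB", matching the QEMU namespace size used by this test bed.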
00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.538 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:40.539 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.539 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:40.540 04:56:57 nvme_scc 
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:10:40.540 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:40.541 04:56:57 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:40.541 04:56:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:40.541 04:56:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:40.541 04:56:57 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
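For reference, the controller walk recorded above (functions.sh@47-63 together with scripts/common.sh@18-27) follows the shape of the sketch below. This is a minimal reconstruction from the trace, not the actual SPDK source: the names ctrls, nvmes, bdfs, ordered_ctrls and pci_can_use are taken verbatim from the log, while the PCI_ALLOWED/PCI_BLOCKED environment filter inside pci_can_use is an assumption about what scripts/common.sh is checking when it returns 0 here.

# Sketch reconstructed from the trace above; allow/block list handling is assumed.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use() { # accept a BDF unless an allow/block list filters it out (assumption)
	local bdf=$1
	[[ ${PCI_BLOCKED:-} == *"$bdf"* ]] && return 1
	[[ -z ${PCI_ALLOWED:-} || $PCI_ALLOWED == *"$bdf"* ]]
}

for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(basename "$(readlink -f "$ctrl/device")") # BDF, e.g. 0000:00:10.0
	pci_can_use "$pci" || continue
	ctrl_dev=${ctrl##*/} # nvme0, nvme1, ...
	ctrls["$ctrl_dev"]=$ctrl_dev
	nvmes["$ctrl_dev"]=${ctrl_dev}_ns # name of this controller's namespace map
	bdfs["$ctrl_dev"]=$pci
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # index by controller number
done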
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:10:40.541 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:40.542 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:10:40.543 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
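The nvme_get helper invoked here (functions.sh@16-23) drives every register dump in this trace: it runs nvme-cli's id-ctrl or id-ns against the device and folds each "field : value" line of the plain-text output into a global associative array named by its first argument. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source (the whitespace trimming around the field name is an assumption about the real helper):

nvme_get() { # e.g. nvme_get ng1n1 id-ns /dev/ng1n1 -> ng1n1[nsze]=0x17a17a ...
	local ref=$1 reg val
	shift
	local -gA "$ref=()" # (re)create the target array as a global, as seen at @20
	while IFS=: read -r reg val; do
		reg=${reg//[[:space:]]/} # field names are padded in nvme-cli output (assumption)
		[[ -n $reg && -n $val ]] || continue # skip blank or unparsable lines
		eval "${ref}[${reg}]=\"${val# }\"" # the same eval-assignment the trace shows at @23
	done < <(/usr/local/src/nvme-cli/nvme "$@")
}

Each [[ -n ... ]] / eval pair logged above corresponds to one iteration of this loop, with IFS=: splitting on the first colon so that multi-colon values such as the lbafN descriptors stay intact in val.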
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:10:40.544 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.545 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 
04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
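The trace above is bash xtrace output of nvme_get in nvme/functions.sh: it runs nvme-cli's id-ns (or id-ctrl) report, reads it line by line, splits each line on the first ':' into a register name and a value, and evals the pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed only from the functions.sh@16-23 line numbers visible in the trace; the whitespace trimming is an assumption and the real script may differ:

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"               # the traced: local -gA 'nvme1n1=()'
      while IFS=: read -r reg val; do   # split on the first ':' only
          reg=${reg%% *}                # assumption: strip column padding after the key
          val=${val# }                  # assumption: drop the pad space after ':'
          [[ -n $val ]] || continue     # the recurring [[ -n ... ]] guards above
          eval "$ref[$reg]=\"$val\""    # e.g. eval 'nvme1n1[nsze]="0x17a17a"'
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # e.g. ... id-ns /dev/nvme1n1
  }

Because read -r puts everything after the first ':' into val, multi-field values such as 'ms:0 lbads:9 rp:0' survive intact even though they contain further colons.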
00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:40.546 
04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.546 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:40.547 04:56:57 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:40.547 04:56:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:40.547 04:56:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:40.547 04:56:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.547 04:56:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.547 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:40.548 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:40.548 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
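Once populated, these are ordinary bash associative arrays holding nvme-cli's text verbatim, so callers index them by register name and convert units themselves. A usage sketch (not from the log) using the wctemp/cctemp values captured just above:

  echo "${nvme2[wctemp]}"                     # 343 - kelvin, as id-ctrl reports it
  echo "$(( ${nvme2[wctemp]} - 273 ))C"       # 70C warning threshold
  echo "$(( ${nvme2[cctemp]} - 273 ))C"       # 100C critical threshold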
00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:40.549 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.549 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:40.550 
04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.550 
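The for ns pattern that just fired (functions.sh@54) is an extglob alternation: for controller nvmeN it matches both the generic character nodes (ngNn*) and the block nodes (nvmeNn*) under the controller's sysfs directory, so each namespace is visited once per node type and each node gets its own associative array. Sketch, again reconstructed from the traced line numbers:

  shopt -s extglob                                        # required for @(...|...)
  local -n _ctrl_ns=${ctrl_dev}_ns                        # @53: nameref to e.g. nvme2_ns
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue                            # @55
      ns_dev=${ns##*/}                                    # @56: e.g. ng2n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"             # @57
      _ctrl_ns[${ns##*n}]=$ns_dev                         # @58: key is the NSID digits
  done

Since ng2n1 and nvme2n1 share NSID 1, the nameref map is written twice for that key and ends up pointing at whichever node the glob yields last, as the earlier _ctrl_ns[...]=ng1n1 followed by _ctrl_ns[...]=nvme1n1 entries show.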
04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:40.550 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.551 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:40.552 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 
04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.552 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.553 04:56:57 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:40.553 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.554 04:56:57 
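Between namespaces, the loop at functions.sh@54 re-enters with the next sysfs entry: an extglob alternation that, for this controller, matches both the generic character-device names (ng2n*) and the block-device names (nvme2n*). A self-contained demo of that expansion and of the @58 indexing — the paths and the _ctrl_ns name follow the trace; running it requires a machine that actually has an nvme2 controller:

shopt -s extglob nullglob                       # extglob enables the @(...) pattern
ctrl=/sys/class/nvme/nvme2                      # controller sysfs path, as in the trace
declare -A _ctrl_ns=()
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # ${ctrl##*nvme} -> "2", ${ctrl##*/} -> "nvme2", so the pattern expands to
    # /sys/class/nvme/nvme2/@(ng2|nvme2n)* : ng2n1..ng2n3, then nvme2n1..
    ns_dev=${ns##*/}                            # @56: e.g. ng2n1 or nvme2n1
    _ctrl_ns[${ns##*n}]=$ns_dev                 # @58: index by namespace number
done
declare -p _ctrl_ns

Because ng2n* sorts before nvme2n*, each index is visited twice, with the later nvme2nX write replacing the ng2nX one — consistent with the trace walking ng2n1 through ng2n3 before reaching nvme2n1.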
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:40.554 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.555 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.821 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:40.821 04:56:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.821 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
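The loop at functions.sh@54 relies on extglob so a single pattern matches both the generic character nodes (ng2n1, ...) and the block nodes (nvme2n1, ...) under the controller's sysfs directory. A runnable reduction of that loop, keeping the -e guard from @55 and trimming everything else:

    shopt -s extglob nullglob               # the @(...) alternation needs extglob

    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns

    # "ng${ctrl##*nvme}" expands to "ng2" and "${ctrl##*/}n" to "nvme2n",
    # so the glob picks up both ng2* and nvme2n* entries inside $ctrl.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e /sys/class/nvme/nvme2/${ns##*/} ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}       # key is the namespace id: 1, 2, 3
    done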
]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:40.822 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.822 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.823 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:40.824 
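Every namespace captured here reports flbas=0x4, and the matching lbaf4 entry is the one tagged "(in use)": metadata size 0, lbads 12, i.e. 2^12 = 4096-byte logical blocks. Combined with nsze=0x100000 that fixes the namespace size. A short check, assuming nvme2n2 was populated by nvme_get as sketched above:

    fmt=$(( ${nvme2n2[flbas]} & 0xf ))          # low nibble indexes lbaf: 0x4 -> 4
    lbaf=${nvme2n2[lbaf$fmt]}                   # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo $(( 1 << lbads ))                      # 4096-byte logical blocks
    echo $(( ${nvme2n2[nsze]} * (1 << lbads) )) # 0x100000 * 4096 = 4294967296 B = 4 GiB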
04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:40.824 04:56:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:40.824 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:40.825 04:56:57 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:40.825 04:56:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:40.825 04:56:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:40.825 04:56:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:40.825 04:56:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.825 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 
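At this point nvme2 is fully registered (ctrls, nvmes, bdfs, ordered_ctrls) and the @47 loop advances to nvme3 at 0000:00:13.0. The registration step, condensed; the trace does not show how $pci is derived, so resolving the sysfs device symlink below is an assumption, and pci_can_use is stubbed (the real allow/deny filter lives in scripts/common.sh):

    pci_can_use() { return 0; }     # stub for the scripts/common.sh allow/deny check

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                            # e.g. nvme3
        pci=$(basename "$(readlink -f "$ctrl/device")") # assumed derivation; e.g. 0000:00:13.0
        pci_can_use "$pci" || continue
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # fills nvme3[...] as replayed above
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # index 3 -> nvme3
    done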
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:40.826 04:56:57 nvme_scc -- 
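Two of the id-ctrl values above decode usefully: ver=0x10400 is NVMe 1.4.0 (major version in bits 31:16, minor in 15:8), and mdts=7 caps transfers at 2^7 units of the controller's minimum page size. MPSMIN comes from the CAP register rather than id-ctrl, so the 4 KiB below is an assumption (it is the QEMU default):

    ver=${nvme3[ver]}
    echo "$(( ver >> 16 )).$(( (ver >> 8) & 0xff ))"   # -> 1.4
    echo $(( (1 << ${nvme3[mdts]}) * 4096 ))           # -> 524288 B = 512 KiB max transfer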
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:40.826 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.826 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.826 
04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:40.827 04:56:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 
04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:40.827 
00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.827 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:40.828 04:56:57 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:40.828 04:56:57 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:40.828 04:56:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
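The scan that finished above is functions.sh populating one bash associative array per controller: nvme_get pipes `nvme id-ctrl /dev/nvmeN` through a `while IFS=: read -r reg val` loop and eval-assigns each register into the array, then registers the device in ctrls/nvmes/bdfs/ordered_ctrls. A minimal standalone sketch of that parsing pattern, assuming nvme-cli is installed and /dev/nvme0 exists; the helper name parse_id_ctrl is illustrative, not a functions.sh name, and it assigns directly instead of via eval but yields the same array contents:

    #!/usr/bin/env bash
    # Sketch: turn the "reg : val" lines printed by nvme-cli into a bash
    # associative array, equivalent to the eval 'nvmeN[reg]="val"'
    # assignments traced above.
    declare -A ctrl
    parse_id_ctrl() {
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # strip the padding around the key
            val=${val# }                   # strip the single leading space
            [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }
    parse_id_ctrl /dev/nvme0               # illustrative device path
    echo "mdts=${ctrl[mdts]:-?} oncs=${ctrl[oncs]:-?}"

Values with embedded colons (the ps0 power-state line, for instance) survive because read assigns everything after the first colon to val.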
00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:40.829 04:56:57 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:40.829 04:56:57 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:40.829 04:56:57 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:40.829 04:56:57 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:41.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.033 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.033 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.033 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.033 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.033 04:56:58 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:42.033 04:56:58 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.033 04:56:58 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.033 04:56:58 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:42.033 ************************************ 00:10:42.033 START TEST nvme_simple_copy 00:10:42.033 ************************************ 00:10:42.033 04:56:58 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:42.294 Initializing NVMe Controllers 00:10:42.294 Attaching to 0000:00:10.0 00:10:42.294 Controller supports SCC. Attached to 0000:00:10.0 00:10:42.294 Namespace ID: 1 size: 6GB 00:10:42.294 Initialization complete. 
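The get_ctrl_with_feature scc selection just traced reduces to one bitmask test: ctrl_has_scc reads each controller's ONCS word (0x15d for all four QEMU controllers here) and checks bit 8, which advertises the Copy (simple copy) command. A sketch of that test, with the value hard-coded from the trace:

    # ONCS bit 8 = Copy command support; 0x15d sets bits 0,2,3,4,6,8,
    # so (oncs & 1<<8) is nonzero and the controller qualifies.
    oncs=0x15d
    if (( oncs & 1 << 8 )); then
        echo "controller supports SCC"
    fi

All four controllers pass, and the first one collected (nvme1) is returned, which is why the simple_copy test above attached to 0000:00:10.0.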
00:10:42.294 00:10:42.294 Controller QEMU NVMe Ctrl (12340 ) 00:10:42.294 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:42.294 Namespace Block Size:4096 00:10:42.294 Writing LBAs 0 to 63 with Random Data 00:10:42.294 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:42.294 LBAs matching Written Data: 64 00:10:42.294 00:10:42.294 real 0m0.271s 00:10:42.294 user 0m0.098s 00:10:42.294 sys 0m0.071s 00:10:42.294 04:56:58 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.294 ************************************ 00:10:42.294 END TEST nvme_simple_copy 00:10:42.294 ************************************ 00:10:42.294 04:56:58 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 00:10:42.294 real 0m7.854s 00:10:42.294 user 0m1.123s 00:10:42.294 sys 0m1.495s 00:10:42.294 04:56:58 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.294 ************************************ 00:10:42.294 04:56:58 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 END TEST nvme_scc 00:10:42.294 ************************************ 00:10:42.294 04:56:58 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:42.294 04:56:58 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:42.294 04:56:58 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:42.294 04:56:58 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:42.294 04:56:58 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:42.294 04:56:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.294 04:56:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.294 04:56:58 -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 ************************************ 00:10:42.294 START TEST nvme_fdp 00:10:42.294 ************************************ 00:10:42.294 04:56:58 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:42.294 * Looking for test storage... 00:10:42.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.294 04:56:58 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.294 04:56:58 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.294 04:56:58 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.555 04:56:59 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:42.555 04:56:59 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.555 04:56:59 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.555 --rc genhtml_branch_coverage=1 00:10:42.555 --rc genhtml_function_coverage=1 00:10:42.555 --rc genhtml_legend=1 00:10:42.555 --rc geninfo_all_blocks=1 00:10:42.555 --rc geninfo_unexecuted_blocks=1 00:10:42.555 00:10:42.555 ' 00:10:42.555 04:56:59 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.555 --rc genhtml_branch_coverage=1 00:10:42.555 --rc genhtml_function_coverage=1 00:10:42.555 --rc genhtml_legend=1 00:10:42.555 --rc geninfo_all_blocks=1 00:10:42.555 --rc geninfo_unexecuted_blocks=1 00:10:42.555 00:10:42.555 ' 00:10:42.555 04:56:59 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.555 --rc genhtml_branch_coverage=1 00:10:42.555 --rc genhtml_function_coverage=1 00:10:42.555 --rc genhtml_legend=1 00:10:42.555 --rc geninfo_all_blocks=1 00:10:42.555 --rc geninfo_unexecuted_blocks=1 00:10:42.555 00:10:42.555 ' 00:10:42.555 04:56:59 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.555 --rc genhtml_branch_coverage=1 00:10:42.555 --rc genhtml_function_coverage=1 00:10:42.555 --rc genhtml_legend=1 00:10:42.555 --rc geninfo_all_blocks=1 00:10:42.555 --rc geninfo_unexecuted_blocks=1 00:10:42.555 00:10:42.555 ' 00:10:42.555 04:56:59 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.555 04:56:59 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.555 04:56:59 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.555 04:56:59 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.555 04:56:59 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.555 04:56:59 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:42.555 04:56:59 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:42.555 04:56:59 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:42.555 04:56:59 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.555 04:56:59 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:42.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.815 Waiting for block devices as requested 00:10:43.076 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.076 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.076 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.337 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.637 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:48.637 04:57:04 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:48.637 04:57:04 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:48.637 04:57:04 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:48.637 04:57:04 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:48.637 04:57:04 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:48.637 04:57:04 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:48.637 04:57:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.637 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.638 04:57:04 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:48.638 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:48.639 04:57:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 
04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:48.639 04:57:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:48.639 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:48.640 04:57:04 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:48.640 04:57:04 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.640 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
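Annotation: the repeating IFS=: / read -r reg val / [[ -n ... ]] / eval triplets in this trace are the nvme_get helper from nvme/functions.sh flattening `nvme id-ns` (or `id-ctrl`) output into a Bash associative array, here ng0n1. A minimal sketch consistent with the traced lines (functions.sh@16-23); the whitespace trimming details are assumptions, everything else mirrors the trace:

    nvme_get() {
        local ref=$1 reg val                    # functions.sh@17
        shift                                   # functions.sh@18
        local -gA "$ref=()"                     # functions.sh@20, e.g. declare -gA 'ng0n1=()'
        while IFS=: read -r reg val; do         # functions.sh@21
            [[ -n $val ]] || continue           # functions.sh@22: skip lines without "reg : val"
            reg=${reg//[[:space:]]/}            # assumed: trim padding around the register name
            eval "${ref}[${reg}]=\"${val# }\""  # functions.sh@23, e.g. ng0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }
    # invoked as in the trace: nvme_get ng0n1 id-ns /dev/ng0n1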
00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:48.641 04:57:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
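Annotation: with the ng0n1 map filled, the captured fields decode directly. For example, flbas=0x4 above selects lbaf4 ('ms:0 lbads:12 rp:0 (in use)'), i.e. 4096-byte data blocks with no metadata. A hypothetical follow-on snippet, not part of the test itself:

    fmt=$((ng0n1[flbas] & 0xf))                # low nibble of FLBAS picks the LBA format -> 4
    lbaf=${ng0n1[lbaf$fmt]}                    # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # extract the lbads field -> 12
    echo "in-use block size: $((1 << lbads))"  # 2^12 -> 4096 bytes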
00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.641 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:48.642 04:57:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.642 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.643 04:57:04 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:48.643 04:57:04 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:48.643 04:57:04 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:48.643 04:57:04 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:48.643 04:57:04 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.643 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:48.644 04:57:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.644 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.645 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:48.646 04:57:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:48.646 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:48.647 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:48.647 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.647 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.648 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:48.648 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.648 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:48.649 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
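(The lbafN strings being recorded for nvme1n1 here, with lbaf7 flagged "(in use)" just below, pack three fields per LBA format: metadata bytes (ms), the data size as a power of two (lbads), and relative performance (rp). A minimal decoding sketch, assuming that captured string format; this helper is hypothetical and not part of functions.sh:

  lbaf='ms:64 lbads:12 rp:0 (in use)'         # the format nvme1n1 reports in use
  ms=${lbaf#ms:};        ms=${ms%% *}         # -> 64 metadata bytes per block
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12, log2 of the data block size
  echo "data block: $((1 << lbads)) B, metadata: ${ms} B"   # -> 4096 B, 64 B

So the lbads:9 entries above are 512-byte formats and the lbads:12 entries are 4 KiB formats.)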
00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:48.649 04:57:05 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:48.650 04:57:05 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:48.650 04:57:05 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:48.650 04:57:05 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:48.650 04:57:05 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
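(The functions.sh@16-23 steps repeating through this stretch are one parse pass: run nvme-cli's id-ctrl, split each output line at the first colon, skip empty values, and stash the pair in a global associative array. A simplified reconstruction under those assumptions; the real helper evals into a caller-named ref rather than a fixed array:

  declare -A nvme2
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue               # mirrors the [[ -n ... ]] guard in the trace
      reg=${reg//[[:space:]]/}                # 'vid       ' -> 'vid'
      val=${val#"${val%%[![:space:]]*}"}      # trim leading blanks from the value
      nvme2[$reg]=$val                        # e.g. nvme2[vid]=0x1b36
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)

Fields then read back directly, e.g. ${nvme2[ver]} is 0x10400, which encodes NVMe 1.4 (major 0x0001, minor 0x04).)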
00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.650 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:48.651 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
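(The wctemp=343 and cctemp=373 captured just above are the warning and critical composite-temperature thresholds, which NVMe reports in kelvins; a quick check with a hypothetical one-liner:

  k_to_c() { echo $(( $1 - 273 )); }
  k_to_c 343   # -> 70 C  (warning threshold)
  k_to_c 373   # -> 100 C (critical threshold)
)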
00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:48.651 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.651 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
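(A few statements below, after the power-state fields, the scan drops from controller to namespace scope: functions.sh@53 binds a nameref _ctrl_ns to nvme2_ns, and functions.sh@54 globs both generic (ng2nY) and block (nvme2nY) namespace nodes under the controller's sysfs directory. A standalone sketch of that glob, assuming extglob is enabled as the harness requires:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      # expands to @(ng2|nvme2n)* for this controller
      echo "namespace node: ${ns##*/}"        # e.g. ng2n1, ng2n2
  done
)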
00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:48.652 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 
04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.653 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.654 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:48.654 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:48.655 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 
04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.655 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.655 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:48.656 
04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
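The functions.sh@16-@23 records above show the per-namespace pattern: run `/usr/local/src/nvme-cli/nvme id-ns` on the node, then read each `register : value` line into a global associative array named after the device (ng2n1, ng2n2, ng2n3, ...). A minimal sketch of that loop, reconstructed from this trace alone — the helper name nvme_get_sketch and the exact key/value trimming are assumptions, not verbatim nvme/functions.sh source:

    # Sketch of the parse loop seen at functions.sh@17-@23; assumptions noted inline.
    nvme_get_sketch() {
        local ref=$1 reg val          # @17: ref is the array name, e.g. ng2n3
        shift                         # @18: remaining args form the id-ns command
        local -gA "$ref=()"           # @20: declare the global associative array
        while IFS=: read -r reg val; do            # @21: split each output line on ':'
            [[ -n $val ]] || continue              # @22: skip lines with no value
            reg=${reg//[[:space:]]/}               # normalize key, e.g. 'nsze' (assumed)
            eval "${ref}[${reg}]=\"${val# }\""     # @23: e.g. ng2n3[nsze]="0x100000"
        done < <("$@")                # @16: e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
    }

Called as, for instance, `nvme_get_sketch ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3`, this yields exactly the assignments logged in the surrounding records.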
00:10:48.656 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:48.657 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.657 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:48.657 04:57:05 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.657 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.658 
04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:48.658 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:48.658 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.659 
04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
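The functions.sh@54-@58 records in this trace show the loop driving those parses: every `ngXnY`/`nvmeXnY` node under a controller's sysfs directory is fed to the id-ns parser, and the device name is recorded in `_ctrl_ns`, indexed by namespace id. Sketched from the trace — it assumes `shopt -s extglob` is in effect and that `$ctrl` holds the sysfs path, e.g. /sys/class/nvme/nvme2:

    # Sketch of the enumeration at functions.sh@54-@58; not verbatim source.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng2n* or nvme2n*
        [[ -e $ns ]] || continue                  # @55: skip unmatched glob entries
        ns_dev=${ns##*/}                          # @56: e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fill the per-namespace array
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: index by NSID, e.g. _ctrl_ns[1]
    done

With ctrl=/sys/class/nvme/nvme2, the @54 glob expands `ng${ctrl##*nvme}` to "ng2" and `${ctrl##*/}n` to "nvme2n", so both the character-device nodes (ng2nY) and the block-device nodes (nvme2nY) are visited — which is why the records below repeat the same registers for nvme2n1 after ng2n1..ng2n3, with the later pass overwriting `_ctrl_ns[Y]`.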
00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:48.659 04:57:05 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:48.659 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:48.660 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:48.660 04:57:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:48.661 04:57:05 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:48.661 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.661 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:48.662 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:48.662 04:57:05 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:48.662 04:57:05 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:48.662 04:57:05 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:48.662 04:57:05 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:48.662 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
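Note: the lbaf descriptors captured in the id-ns parses above encode the LBA data size as a power of two: lbads:9 means 2^9 = 512-byte blocks, and lbads:12, the format marked "(in use)", means 2^12 = 4096-byte blocks. A minimal sketch that derives the in-use block size, assuming the nvme2n3 array populated by the trace above (the variable names below are illustrative, not part of nvme/functions.sh):

# FLBAS bits 3:0 select the active LBA format; its lbads field is log2(block size).
lbaf_in_use=$(( ${nvme2n3[flbas]} & 0xf ))        # 0x4 -> LBA format 4
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme2n3[lbaf$lbaf_in_use]}")
echo "in-use LBA data size: $(( 1 << lbads )) bytes"   # lbads:12 -> 4096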
00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 
04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.663 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.664 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
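[Editor's note] The trace above is nvme/functions.sh reading 'nvme id-ctrl' output line by line into a bash associative array (IFS=: read -r reg val; eval "nvme3[$reg]=$val"). A minimal standalone sketch of that pattern, assuming nvme-cli is installed; the device path is illustrative:
declare -A regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}             # field name with padding stripped (e.g. vwc, subnqn, ps0)
    val=${val#"${val%%[![:space:]]*}"}   # value with leading whitespace stripped
    [[ -n $reg && -n $val ]] && regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)
echo "vwc=${regs[vwc]} subnqn=${regs[subnqn]}"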
00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:48.665 04:57:05 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:48.665 04:57:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:48.666 04:57:05 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:48.666 04:57:05 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:48.666 04:57:05 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.490 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.490 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.490 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.490 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:49.749 04:57:06 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:49.749 04:57:06 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.749 04:57:06 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.749 04:57:06 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:49.749 ************************************ 00:10:49.749 START TEST nvme_flexible_data_placement 00:10:49.749 ************************************ 00:10:49.749 04:57:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:49.749 Initializing NVMe Controllers 00:10:49.749 Attaching to 0000:00:13.0 00:10:49.749 Controller supports FDP Attached to 0000:00:13.0 00:10:49.749 Namespace ID: 1 Endurance Group ID: 1 00:10:49.749 Initialization complete. 
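[Editor's note] The controller selection just traced reduces to one check: Identify Controller CTRATT bit 19 (Flexible Data Placement). Only nvme3 reports 0x88010, which has that bit set; the others report 0x8000 and are skipped. A standalone sketch of the same check, assuming nvme-cli and jq are available and an illustrative device path:
ctratt=$(nvme id-ctrl /dev/nvme3 -o json | jq -r '.ctratt')
if (( ctratt & (1 << 19) )); then
    printf 'FDP supported (ctratt=0x%x)\n' "$ctratt"   # nvme3 above reports 0x88010
fi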
00:10:49.749 00:10:49.749 ================================== 00:10:49.749 == FDP tests for Namespace: #01 == 00:10:49.749 ================================== 00:10:49.749 00:10:49.749 Get Feature: FDP: 00:10:49.749 ================= 00:10:49.749 Enabled: Yes 00:10:49.749 FDP configuration Index: 0 00:10:49.749 00:10:49.749 FDP configurations log page 00:10:49.749 =========================== 00:10:49.749 Number of FDP configurations: 1 00:10:49.749 Version: 0 00:10:49.749 Size: 112 00:10:49.749 FDP Configuration Descriptor: 0 00:10:49.749 Descriptor Size: 96 00:10:49.749 Reclaim Group Identifier format: 2 00:10:49.749 FDP Volatile Write Cache: Not Present 00:10:49.749 FDP Configuration: Valid 00:10:49.749 Vendor Specific Size: 0 00:10:49.749 Number of Reclaim Groups: 2 00:10:49.749 Number of Reclaim Unit Handles: 8 00:10:49.749 Max Placement Identifiers: 128 00:10:49.749 Number of Namespaces Supported: 256 00:10:49.749 Reclaim Unit Nominal Size: 6000000 bytes 00:10:49.749 Estimated Reclaim Unit Time Limit: Not Reported 00:10:49.749 RUH Desc #000: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #001: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #002: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #003: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #004: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #005: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #006: RUH Type: Initially Isolated 00:10:49.749 RUH Desc #007: RUH Type: Initially Isolated 00:10:49.749 00:10:49.749 FDP reclaim unit handle usage log page 00:10:49.749 ====================================== 00:10:49.749 Number of Reclaim Unit Handles: 8 00:10:49.749 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:49.749 RUH Usage Desc #001: RUH Attributes: Unused 00:10:49.749 RUH Usage Desc #002: RUH Attributes: Unused 00:10:49.749 RUH Usage Desc #003: RUH Attributes: Unused 00:10:49.749 RUH Usage Desc #004: RUH Attributes: Unused 00:10:49.749 RUH Usage Desc #005: RUH Attributes: Unused 00:10:49.749 RUH Usage Desc #006: RUH Attributes: Unused 00:10:49.749 RUH Usage Desc #007: RUH Attributes: Unused 00:10:49.749 00:10:49.749 FDP statistics log page 00:10:49.749 ======================= 00:10:49.749 Host bytes with metadata written: 1965219840 00:10:49.749 Media bytes with metadata written: 1965506560 00:10:49.749 Media bytes erased: 0 00:10:49.749 00:10:49.749 FDP Reclaim unit handle status 00:10:49.749 ============================== 00:10:49.749 Number of RUHS descriptors: 2 00:10:49.749 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002dd2 00:10:49.749 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:49.749 00:10:49.749 FDP write on placement id: 0 success 00:10:49.749 00:10:49.749 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:49.749 00:10:49.749 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:49.749 00:10:49.749 Get Feature: FDP Events for Placement handle: #0 00:10:49.749 ======================== 00:10:49.749 Number of FDP Events: 6 00:10:49.749 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:49.749 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:49.749 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:49.749 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:49.749 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:49.749 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:49.749 00:10:49.749 FDP events log
page 00:10:49.749 =================== 00:10:49.749 Number of FDP events: 1 00:10:49.749 FDP Event #0: 00:10:49.749 Event Type: RU Not Written to Capacity 00:10:49.749 Placement Identifier: Valid 00:10:49.749 NSID: Valid 00:10:49.749 Location: Valid 00:10:49.749 Placement Identifier: 0 00:10:49.749 Event Timestamp: 2 00:10:49.749 Namespace Identifier: 1 00:10:49.749 Reclaim Group Identifier: 0 00:10:49.749 Reclaim Unit Handle Identifier: 0 00:10:49.749 00:10:49.749 FDP test passed 00:10:49.749 00:10:49.749 real 0m0.221s 00:10:49.749 user 0m0.059s 00:10:49.749 sys 0m0.062s 00:10:49.749 04:57:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.749 04:57:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:49.749 ************************************ 00:10:49.749 END TEST nvme_flexible_data_placement 00:10:49.749 ************************************ 00:10:50.008 00:10:50.008 real 0m7.595s 00:10:50.008 user 0m1.040s 00:10:50.008 sys 0m1.446s 00:10:50.008 04:57:06 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.008 04:57:06 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:50.008 ************************************ 00:10:50.008 END TEST nvme_fdp 00:10:50.008 ************************************ 00:10:50.008 04:57:06 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:50.008 04:57:06 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:50.008 04:57:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.008 04:57:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.008 04:57:06 -- common/autotest_common.sh@10 -- # set +x 00:10:50.008 ************************************ 00:10:50.008 START TEST nvme_rpc 00:10:50.008 ************************************ 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:50.008 * Looking for test storage... 
00:10:50.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.008 04:57:06 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.008 --rc genhtml_branch_coverage=1 00:10:50.008 --rc genhtml_function_coverage=1 00:10:50.008 --rc genhtml_legend=1 00:10:50.008 --rc geninfo_all_blocks=1 00:10:50.008 --rc geninfo_unexecuted_blocks=1 00:10:50.008 00:10:50.008 ' 00:10:50.008 04:57:06 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.008 --rc genhtml_branch_coverage=1 00:10:50.008 --rc genhtml_function_coverage=1 00:10:50.008 --rc genhtml_legend=1 00:10:50.008 --rc geninfo_all_blocks=1 00:10:50.009 --rc geninfo_unexecuted_blocks=1 00:10:50.009 00:10:50.009 ' 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:50.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.009 --rc genhtml_branch_coverage=1 00:10:50.009 --rc genhtml_function_coverage=1 00:10:50.009 --rc genhtml_legend=1 00:10:50.009 --rc geninfo_all_blocks=1 00:10:50.009 --rc geninfo_unexecuted_blocks=1 00:10:50.009 00:10:50.009 ' 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.009 --rc genhtml_branch_coverage=1 00:10:50.009 --rc genhtml_function_coverage=1 00:10:50.009 --rc genhtml_legend=1 00:10:50.009 --rc geninfo_all_blocks=1 00:10:50.009 --rc geninfo_unexecuted_blocks=1 00:10:50.009 00:10:50.009 ' 00:10:50.009 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.009 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:50.009 04:57:06 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:50.268 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:50.269 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77710 00:10:50.269 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:50.269 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:50.269 04:57:06 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77710 00:10:50.269 04:57:06 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77710 ']' 00:10:50.269 04:57:06 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.269 04:57:06 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.269 04:57:06 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.269 04:57:06 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.269 04:57:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.269 [2024-11-21 04:57:06.819459] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:10:50.269 [2024-11-21 04:57:06.819587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77710 ] 00:10:50.269 [2024-11-21 04:57:06.978653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:50.527 [2024-11-21 04:57:07.004118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.527 [2024-11-21 04:57:07.004166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.098 04:57:07 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.098 04:57:07 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:51.098 04:57:07 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:51.358 Nvme0n1 00:10:51.358 04:57:07 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:51.358 04:57:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:51.618 request: 00:10:51.618 { 00:10:51.618 "bdev_name": "Nvme0n1", 00:10:51.618 "filename": "non_existing_file", 00:10:51.618 "method": "bdev_nvme_apply_firmware", 00:10:51.618 "req_id": 1 00:10:51.618 } 00:10:51.618 Got JSON-RPC error response 00:10:51.618 response: 00:10:51.618 { 00:10:51.618 "code": -32603, 00:10:51.618 "message": "open file failed." 00:10:51.618 } 00:10:51.618 04:57:08 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:51.618 04:57:08 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:51.618 04:57:08 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:51.618 04:57:08 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:51.618 04:57:08 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77710 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77710 ']' 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77710 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77710 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.618 killing process with pid 77710 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77710' 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77710 00:10:51.618 04:57:08 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77710 00:10:52.189 00:10:52.189 real 0m2.133s 00:10:52.189 user 0m4.097s 00:10:52.189 sys 0m0.532s 00:10:52.189 ************************************ 00:10:52.189 END TEST nvme_rpc 00:10:52.189 04:57:08 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.189 04:57:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.189 ************************************ 00:10:52.189 04:57:08 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:52.189 04:57:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:52.189 04:57:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.189 04:57:08 -- common/autotest_common.sh@10 -- # set +x 00:10:52.189 ************************************ 00:10:52.189 START TEST nvme_rpc_timeouts 00:10:52.189 ************************************ 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:52.189 * Looking for test storage... 00:10:52.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.189 04:57:08 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.189 --rc genhtml_function_coverage=1 00:10:52.189 --rc genhtml_legend=1 00:10:52.189 --rc geninfo_all_blocks=1 00:10:52.189 --rc geninfo_unexecuted_blocks=1 00:10:52.189 00:10:52.189 ' 00:10:52.189 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.189 --rc genhtml_branch_coverage=1 00:10:52.190 --rc genhtml_function_coverage=1 00:10:52.190 --rc genhtml_legend=1 00:10:52.190 --rc geninfo_all_blocks=1 00:10:52.190 --rc geninfo_unexecuted_blocks=1 00:10:52.190 00:10:52.190 ' 00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77764 00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77764 00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77796 00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
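[Editor's note] The cmp_versions walkthrough repeated above (scripts/common.sh) splits dotted versions on the characters '.-:' and compares them field by field. A simplified sketch of that logic, assuming purely numeric fields (the real helper also validates each field with decimal()):
lt() {
    local -a ver1 ver2
    local v n
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    (( n = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov 1.15 < 2: legacy --rc lcov_* option names apply'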
00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77796 00:10:52.190 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77796 ']' 00:10:52.190 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.190 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.190 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.190 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.190 04:57:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:52.190 04:57:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:52.450 [2024-11-21 04:57:08.954319] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:10:52.450 [2024-11-21 04:57:08.954449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77796 ] 00:10:52.450 [2024-11-21 04:57:09.112119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:52.450 [2024-11-21 04:57:09.139222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.450 [2024-11-21 04:57:09.139296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.391 04:57:09 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.391 04:57:09 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:53.391 Checking default timeout settings: 00:10:53.391 04:57:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:53.391 04:57:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:53.649 Making settings changes with rpc: 00:10:53.649 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:53.649 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:53.649 Check default vs. modified settings: 00:10:53.649 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:53.649 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:54.215 Setting action_on_timeout is changed as expected. 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:54.215 Setting timeout_us is changed as expected. 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:54.215 Setting timeout_admin_us is changed as expected. 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77764 /tmp/settings_modified_77764 00:10:54.215 04:57:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77796 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77796 ']' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77796 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77796 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.215 killing process with pid 77796 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77796' 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77796 00:10:54.215 04:57:10 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77796 00:10:54.473 RPC TIMEOUT SETTING TEST PASSED. 00:10:54.473 04:57:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
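[Editor's note] The pass/fail logic above, condensed: dump the JSON config before and after bdev_nvme_set_options and confirm each field moved. A minimal sketch using the same rpc.py calls and grep/awk/sed extraction as the test; the temp-file names are illustrative:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected ($before -> $after)"
done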
00:10:54.473 00:10:54.473 real 0m2.322s 00:10:54.473 user 0m4.608s 00:10:54.473 sys 0m0.532s 00:10:54.473 04:57:11 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.473 04:57:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:54.473 ************************************ 00:10:54.473 END TEST nvme_rpc_timeouts 00:10:54.473 ************************************ 00:10:54.473 04:57:11 -- spdk/autotest.sh@239 -- # uname -s 00:10:54.473 04:57:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:54.473 04:57:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:54.473 04:57:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:54.473 04:57:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.473 04:57:11 -- common/autotest_common.sh@10 -- # set +x 00:10:54.473 ************************************ 00:10:54.473 START TEST sw_hotplug 00:10:54.473 ************************************ 00:10:54.473 04:57:11 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:54.473 * Looking for test storage... 00:10:54.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:54.473 04:57:11 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.473 04:57:11 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.473 04:57:11 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.732 04:57:11 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.732 04:57:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:54.732 04:57:11 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.732 04:57:11 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.732 --rc genhtml_branch_coverage=1 00:10:54.732 --rc genhtml_function_coverage=1 00:10:54.732 --rc genhtml_legend=1 00:10:54.732 --rc geninfo_all_blocks=1 00:10:54.732 --rc geninfo_unexecuted_blocks=1 00:10:54.732 00:10:54.732 ' 00:10:54.732 04:57:11 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.732 --rc genhtml_branch_coverage=1 00:10:54.732 --rc genhtml_function_coverage=1 00:10:54.732 --rc genhtml_legend=1 00:10:54.732 --rc geninfo_all_blocks=1 00:10:54.732 --rc geninfo_unexecuted_blocks=1 00:10:54.732 00:10:54.732 ' 00:10:54.732 04:57:11 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.732 --rc genhtml_branch_coverage=1 00:10:54.732 --rc genhtml_function_coverage=1 00:10:54.732 --rc genhtml_legend=1 00:10:54.732 --rc geninfo_all_blocks=1 00:10:54.732 --rc geninfo_unexecuted_blocks=1 00:10:54.732 00:10:54.732 ' 00:10:54.732 04:57:11 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.732 --rc genhtml_branch_coverage=1 00:10:54.732 --rc genhtml_function_coverage=1 00:10:54.732 --rc genhtml_legend=1 00:10:54.732 --rc geninfo_all_blocks=1 00:10:54.732 --rc geninfo_unexecuted_blocks=1 00:10:54.732 00:10:54.732 ' 00:10:54.732 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:54.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:54.991 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:54.991 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:54.991 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:54.991 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:54.991 04:57:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:54.991 04:57:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:54.991 04:57:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:55.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:55.507 Waiting for block devices as requested 00:10:55.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.507 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.809 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:55.809 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.096 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:01.096 04:57:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:01.096 04:57:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:01.096 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:01.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:01.096 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:01.354 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:01.612 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:01.612 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:01.612 04:57:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78643 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:01.871 04:57:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:01.871 04:57:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:01.871 04:57:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:01.871 04:57:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:01.871 04:57:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:01.871 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:01.872 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:01.872 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:01.872 04:57:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:01.872 Initializing NVMe Controllers 00:11:01.872 Attaching to 0000:00:10.0 00:11:01.872 Attaching to 0000:00:11.0 00:11:02.133 Attached to 0000:00:10.0 00:11:02.133 Attached to 0000:00:11.0 00:11:02.133 Initialization complete. Starting I/O... 
00:11:02.133 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:02.133 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:02.133 00:11:03.074 QEMU NVMe Ctrl (12340 ): 2888 I/Os completed (+2888) 00:11:03.074 QEMU NVMe Ctrl (12341 ): 2894 I/Os completed (+2894) 00:11:03.074 00:11:04.007 QEMU NVMe Ctrl (12340 ): 7256 I/Os completed (+4368) 00:11:04.007 QEMU NVMe Ctrl (12341 ): 7426 I/Os completed (+4532) 00:11:04.007 00:11:04.939 QEMU NVMe Ctrl (12340 ): 11763 I/Os completed (+4507) 00:11:04.939 QEMU NVMe Ctrl (12341 ): 11835 I/Os completed (+4409) 00:11:04.939 00:11:05.920 QEMU NVMe Ctrl (12340 ): 15495 I/Os completed (+3732) 00:11:05.920 QEMU NVMe Ctrl (12341 ): 15922 I/Os completed (+4087) 00:11:05.920 00:11:07.318 QEMU NVMe Ctrl (12340 ): 19046 I/Os completed (+3551) 00:11:07.318 QEMU NVMe Ctrl (12341 ): 19435 I/Os completed (+3513) 00:11:07.318 00:11:07.887 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.887 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.887 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.887 [2024-11-21 04:57:24.418904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:07.887 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:07.887 [2024-11-21 04:57:24.419964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.887 [2024-11-21 04:57:24.420010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.420025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.420042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:07.888 [2024-11-21 04:57:24.421393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.421437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.421451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.421466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.888 [2024-11-21 04:57:24.446290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
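The echo 1 traced at sw_hotplug.sh@39-40 is the surprise-removal half of each hotplug event: writing 1 to a device's sysfs remove node detaches the PCI function out from under the driver, which is what produces the nvme_ctrlr_fail and "aborting outstanding command" errors above. The redirect target is not visible in the xtrace; the path below is the standard Linux sysfs interface:

    # Hot-remove every controller under test (sysfs target assumed, not shown in trace).
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done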
00:11:07.888 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:07.888 [2024-11-21 04:57:24.447392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.447517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.447592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.447652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:07.888 [2024-11-21 04:57:24.448872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.448970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.449220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 [2024-11-21 04:57:24.449282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:07.888 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.888 EAL: Scan for (pci) bus failed. 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.888 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:07.888 Attaching to 0000:00:10.0 00:11:07.888 Attached to 0000:00:10.0 00:11:07.888 QEMU NVMe Ctrl (12340 ): 12 I/Os completed (+12) 00:11:07.888 00:11:08.148 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:08.148 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:08.148 04:57:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:08.148 Attaching to 0000:00:11.0 00:11:08.148 Attached to 0000:00:11.0 00:11:09.089 QEMU NVMe Ctrl (12340 ): 3567 I/Os completed (+3555) 00:11:09.089 QEMU NVMe Ctrl (12341 ): 3578 I/Os completed (+3578) 00:11:09.089 00:11:10.028 QEMU NVMe Ctrl (12340 ): 7192 I/Os completed (+3625) 00:11:10.028 QEMU NVMe Ctrl (12341 ): 7237 I/Os completed (+3659) 00:11:10.028 00:11:10.959 QEMU NVMe Ctrl (12340 ): 12008 I/Os completed (+4816) 00:11:10.959 QEMU NVMe Ctrl (12341 ): 11920 I/Os completed (+4683) 00:11:10.959 00:11:11.891 QEMU NVMe Ctrl (12340 ): 16563 I/Os completed (+4555) 00:11:11.891 QEMU NVMe Ctrl (12341 ): 16340 I/Os completed (+4420) 00:11:11.891 00:11:13.265 QEMU NVMe Ctrl (12340 ): 21455 I/Os completed (+4892) 00:11:13.265 QEMU NVMe Ctrl (12341 ): 20980 I/Os completed (+4640) 00:11:13.265 00:11:14.198 QEMU NVMe Ctrl (12340 ): 26242 I/Os completed (+4787) 00:11:14.198 QEMU NVMe Ctrl (12341 ): 25661 I/Os completed (+4681) 00:11:14.198 00:11:15.132 QEMU NVMe Ctrl (12340 ): 30510 I/Os completed (+4268) 
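The reattach half (sw_hotplug.sh@56-62) rescans the PCI bus and steers each function back to uio_pci_generic, after which the hotplug example logs "Attaching to" / "Attached to". Only the echoed values appear in the xtrace, so the sysfs targets below are an assumption based on the conventional driver_override sequence, matching the four echoes per device in the trace:

    echo 1 > /sys/bus/pci/rescan            # bring the removed functions back
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2> /dev/null || true
        echo "$dev" > /sys/bus/pci/drivers_probe               # rebind, honoring the override
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # clear the override
    done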
00:11:15.132 QEMU NVMe Ctrl (12341 ): 30297 I/Os completed (+4636) 00:11:15.132 00:11:16.069 QEMU NVMe Ctrl (12340 ): 34259 I/Os completed (+3749) 00:11:16.069 QEMU NVMe Ctrl (12341 ): 34658 I/Os completed (+4361) 00:11:16.069 00:11:17.017 QEMU NVMe Ctrl (12340 ): 38979 I/Os completed (+4720) 00:11:17.017 QEMU NVMe Ctrl (12341 ): 40008 I/Os completed (+5350) 00:11:17.017 00:11:17.953 QEMU NVMe Ctrl (12340 ): 43753 I/Os completed (+4774) 00:11:17.953 QEMU NVMe Ctrl (12341 ): 44539 I/Os completed (+4531) 00:11:17.953 00:11:18.887 QEMU NVMe Ctrl (12340 ): 48498 I/Os completed (+4745) 00:11:18.887 QEMU NVMe Ctrl (12341 ): 49142 I/Os completed (+4603) 00:11:18.887 00:11:20.273 QEMU NVMe Ctrl (12340 ): 52091 I/Os completed (+3593) 00:11:20.273 QEMU NVMe Ctrl (12341 ): 52810 I/Os completed (+3668) 00:11:20.273 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.273 [2024-11-21 04:57:36.679919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:20.273 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:20.273 [2024-11-21 04:57:36.681049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.681156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.681202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.681269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:20.273 [2024-11-21 04:57:36.683312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.683489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.683563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.683595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.273 [2024-11-21 04:57:36.700097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:20.273 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:20.273 [2024-11-21 04:57:36.701743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.701848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.701911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.701942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:20.273 [2024-11-21 04:57:36.703484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.703524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.703544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 [2024-11-21 04:57:36.703557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.273 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:20.273 EAL: Scan for (pci) bus failed. 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:20.273 Attaching to 0000:00:10.0 00:11:20.273 Attached to 0000:00:10.0 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.273 04:57:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:20.273 Attaching to 0000:00:11.0 00:11:20.273 Attached to 0000:00:11.0 00:11:21.216 QEMU NVMe Ctrl (12340 ): 2411 I/Os completed (+2411) 00:11:21.216 QEMU NVMe Ctrl (12341 ): 2128 I/Os completed (+2128) 00:11:21.216 00:11:22.159 QEMU NVMe Ctrl (12340 ): 5713 I/Os completed (+3302) 00:11:22.159 QEMU NVMe Ctrl (12341 ): 5453 I/Os completed (+3325) 00:11:22.159 00:11:23.098 QEMU NVMe Ctrl (12340 ): 9424 I/Os completed (+3711) 00:11:23.098 QEMU NVMe Ctrl (12341 ): 9139 I/Os completed (+3686) 00:11:23.098 00:11:24.038 QEMU NVMe Ctrl (12340 ): 13096 I/Os completed (+3672) 00:11:24.039 QEMU NVMe Ctrl (12341 ): 12884 I/Os completed (+3745) 00:11:24.039 00:11:24.982 QEMU NVMe Ctrl (12340 ): 16380 I/Os completed (+3284) 00:11:24.982 QEMU NVMe Ctrl (12341 ): 16249 I/Os completed (+3365) 00:11:24.982 00:11:25.922 QEMU NVMe Ctrl (12340 ): 18992 I/Os completed (+2612) 00:11:25.922 QEMU NVMe Ctrl (12341 ): 18868 I/Os completed (+2619) 00:11:25.922 00:11:27.308 QEMU NVMe Ctrl (12340 ): 21865 I/Os completed (+2873) 00:11:27.308 QEMU NVMe Ctrl (12341 ): 21812 I/Os completed (+2944) 00:11:27.308 
00:11:27.877 QEMU NVMe Ctrl (12340 ): 25658 I/Os completed (+3793) 00:11:27.877 QEMU NVMe Ctrl (12341 ): 26070 I/Os completed (+4258) 00:11:27.877 00:11:29.258 QEMU NVMe Ctrl (12340 ): 29137 I/Os completed (+3479) 00:11:29.258 QEMU NVMe Ctrl (12341 ): 29557 I/Os completed (+3487) 00:11:29.258 00:11:30.198 QEMU NVMe Ctrl (12340 ): 33745 I/Os completed (+4608) 00:11:30.198 QEMU NVMe Ctrl (12341 ): 34476 I/Os completed (+4919) 00:11:30.198 00:11:31.139 QEMU NVMe Ctrl (12340 ): 37261 I/Os completed (+3516) 00:11:31.139 QEMU NVMe Ctrl (12341 ): 37992 I/Os completed (+3516) 00:11:31.139 00:11:32.143 QEMU NVMe Ctrl (12340 ): 40948 I/Os completed (+3687) 00:11:32.143 QEMU NVMe Ctrl (12341 ): 41679 I/Os completed (+3687) 00:11:32.143 00:11:32.429 04:57:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:32.429 04:57:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.429 04:57:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.429 04:57:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.429 [2024-11-21 04:57:49.000910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:32.429 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:32.429 [2024-11-21 04:57:49.002100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.429 [2024-11-21 04:57:49.002208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.429 [2024-11-21 04:57:49.002245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.429 [2024-11-21 04:57:49.002311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.429 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:32.429 [2024-11-21 04:57:49.003904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.429 [2024-11-21 04:57:49.004012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.004120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.004155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.430 [2024-11-21 04:57:49.021628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:32.430 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:32.430 [2024-11-21 04:57:49.022658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.022772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.022811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.022842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:32.430 [2024-11-21 04:57:49.024022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.024102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.024135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 [2024-11-21 04:57:49.024159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.430 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:32.430 EAL: Scan for (pci) bus failed. 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.430 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:32.690 Attaching to 0000:00:10.0 00:11:32.690 Attached to 0000:00:10.0 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.690 04:57:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.690 Attaching to 0000:00:11.0 00:11:32.690 Attached to 0000:00:11.0 00:11:32.690 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:32.690 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:32.690 [2024-11-21 04:57:49.324981] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:44.924 04:58:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:44.924 04:58:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.924 04:58:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.91 00:11:44.924 04:58:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.91 00:11:44.924 04:58:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:44.924 04:58:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.91 00:11:44.924 04:58:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.91 2 00:11:44.924 remove_attach_helper took 42.91s to complete (handling 2 nvme drive(s)) 04:58:01 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78643 00:11:51.509 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78643) - No such process 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78643 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79186 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79186 00:11:51.509 04:58:07 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79186 ']' 00:11:51.509 04:58:07 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.509 04:58:07 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.509 04:58:07 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.509 04:58:07 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.509 04:58:07 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:51.509 04:58:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.509 [2024-11-21 04:58:07.410522] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
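Just above, kill -0 confirms the hotplug example's pid is already gone before wait reaps it; the same liveness probe reappears inside waitforlisten, which blocks until the freshly started spdk_tgt (pid 79186 here) is accepting RPCs on /var/tmp/spdk.sock. Once it returns, the test enables the target's hotplug monitor with rpc_cmd bdev_nvme_set_hotplug -e. A much-simplified stand-in for the real helper in autotest_common.sh, shown only to convey the shape of the wait — the actual function also handles TCP RPC addresses and a configurable retry budget:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            [[ -S $rpc_addr ]] && return 0            # UNIX socket is up; RPCs can flow
            sleep 0.1
        done
        return 1                                      # gave up waiting
    }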
00:11:51.509 [2024-11-21 04:58:07.410807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79186 ] 00:11:51.510 [2024-11-21 04:58:07.571945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.510 [2024-11-21 04:58:07.595957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:51.778 04:58:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:51.778 04:58:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.349 04:58:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.349 04:58:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.349 04:58:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.349 [2024-11-21 04:58:14.378102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:58.349 [2024-11-21 04:58:14.379265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.379301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.379315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 [2024-11-21 04:58:14.379330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.379339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.379346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 [2024-11-21 04:58:14.379357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.379364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.379372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 [2024-11-21 04:58:14.379379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.379387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.379393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.349 04:58:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.349 04:58:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.349 [2024-11-21 04:58:14.878096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
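With use_bdev=true the helper no longer watches sysfs alone: bdev_bdfs (sw_hotplug.sh@12-13, traced repeatedly above) asks the running target which NVMe bdevs still exist and reduces them to unique PCI addresses, and the removal path polls it until the removed controllers disappear. Paraphrased from the xtrace — the script actually feeds the RPC output through process substitution (/dev/fd/63) rather than a plain pipe:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until no bdev still claims one of the removed controllers.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done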
00:11:58.349 [2024-11-21 04:58:14.879258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.879380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.879396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 [2024-11-21 04:58:14.879409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.879416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.879424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 [2024-11-21 04:58:14.879431] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.879440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.879446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 [2024-11-21 04:58:14.879456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.349 [2024-11-21 04:58:14.879462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.349 [2024-11-21 04:58:14.879470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.349 04:58:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:58.349 04:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.916 04:58:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.916 04:58:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.916 04:58:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.916 04:58:15 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.916 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:59.173 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:59.173 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:59.173 04:58:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.371 04:58:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.371 04:58:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.371 04:58:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.371 04:58:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.371 04:58:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.371 [2024-11-21 04:58:27.778287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
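After the rescan/rebind and the settle period (sleep 12), sw_hotplug.sh@70-71 asserts that the target re-enumerated exactly the expected controllers. The backslash-escaped pattern in the trace is just bash quoting the right-hand side of [[ == ]] character by character, making it a literal string comparison, equivalent to:

    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]   # a mismatch fails the event loop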
00:12:11.371 [2024-11-21 04:58:27.779548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.371 [2024-11-21 04:58:27.779678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.371 [2024-11-21 04:58:27.779754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.371 [2024-11-21 04:58:27.779789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.371 [2024-11-21 04:58:27.779895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.371 [2024-11-21 04:58:27.779925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.371 [2024-11-21 04:58:27.779990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.371 [2024-11-21 04:58:27.780045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.371 [2024-11-21 04:58:27.780072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.371 [2024-11-21 04:58:27.780161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.371 [2024-11-21 04:58:27.780192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.371 [2024-11-21 04:58:27.780219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.371 04:58:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:11.371 04:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:11.630 [2024-11-21 04:58:28.178286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:11.630 [2024-11-21 04:58:28.179482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.630 [2024-11-21 04:58:28.179589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.630 [2024-11-21 04:58:28.179679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.630 [2024-11-21 04:58:28.179747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.630 [2024-11-21 04:58:28.179766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.630 [2024-11-21 04:58:28.179832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.630 [2024-11-21 04:58:28.179862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.630 [2024-11-21 04:58:28.179880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.630 [2024-11-21 04:58:28.179940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.630 [2024-11-21 04:58:28.180055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.630 [2024-11-21 04:58:28.180073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.631 [2024-11-21 04:58:28.180143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.631 04:58:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.631 04:58:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.631 04:58:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:11.631 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.889 04:58:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.090 04:58:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.090 04:58:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.090 04:58:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.090 04:58:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.090 04:58:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.090 04:58:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:24.090 04:58:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:24.090 [2024-11-21 04:58:40.678493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:24.090 [2024-11-21 04:58:40.679574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.090 [2024-11-21 04:58:40.679604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.090 [2024-11-21 04:58:40.679628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.090 [2024-11-21 04:58:40.679642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.090 [2024-11-21 04:58:40.679651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.090 [2024-11-21 04:58:40.679658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.090 [2024-11-21 04:58:40.679667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.090 [2024-11-21 04:58:40.679674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.090 [2024-11-21 04:58:40.679682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.090 [2024-11-21 04:58:40.679688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.090 [2024-11-21 04:58:40.679696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.090 [2024-11-21 04:58:40.679702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.656 04:58:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.656 04:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.656 [2024-11-21 04:58:41.178491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:24.656 [2024-11-21 04:58:41.179557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.656 [2024-11-21 04:58:41.179586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.656 [2024-11-21 04:58:41.179597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.656 [2024-11-21 04:58:41.179620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.656 [2024-11-21 04:58:41.179627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.656 [2024-11-21 04:58:41.179642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.656 [2024-11-21 04:58:41.179650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.656 [2024-11-21 04:58:41.179658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.656 [2024-11-21 04:58:41.179665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.656 [2024-11-21 04:58:41.179673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.656 [2024-11-21 04:58:41.179679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.656 [2024-11-21 04:58:41.179688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.656 04:58:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:24.656 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:25.222 04:58:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.222 04:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:25.222 04:58:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:25.222 04:58:41 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:25.222 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:25.480 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:25.480 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:25.480 04:58:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:37.683 04:58:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:37.683 04:58:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:37.683 04:58:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:37.683 04:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.683 04:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.683 04:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.683 04:58:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 04:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.72 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.72 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:12:37.683 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:37.683 04:58:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:37.683 04:58:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.243 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.243 04:59:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.243 04:59:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.244 04:59:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:44.244 [2024-11-21 04:59:00.130567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:44.244 [2024-11-21 04:59:00.131459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.131493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.131507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 [2024-11-21 04:59:00.131522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.131531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.131540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 [2024-11-21 04:59:00.131549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.131556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.131567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 [2024-11-21 04:59:00.131573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.131581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.131587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.244 04:59:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.244 04:59:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.244 [2024-11-21 04:59:00.630568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:44.244 [2024-11-21 04:59:00.631402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.631438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.631449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 [2024-11-21 04:59:00.631464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.631471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.631481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 [2024-11-21 04:59:00.631488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.631497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.631503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 [2024-11-21 04:59:00.631511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.244 [2024-11-21 04:59:00.631518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.244 [2024-11-21 04:59:00.631527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.244 04:59:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:44.244 04:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:44.502 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:44.502 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:44.502 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:44.503 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.503 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # 
rpc_cmd bdev_get_bdevs 00:12:44.503 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.503 04:59:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.503 04:59:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.503 04:59:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.503 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:44.503 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.830 04:59:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:57.027 04:59:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.027 04:59:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.027 04:59:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:57.027 04:59:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.027 04:59:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.027 04:59:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:57.027 04:59:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
sleep 0.5 00:12:57.027 [2024-11-21 04:59:13.530771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:57.027 [2024-11-21 04:59:13.531701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.027 [2024-11-21 04:59:13.531734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.027 [2024-11-21 04:59:13.531748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.027 [2024-11-21 04:59:13.531762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.027 [2024-11-21 04:59:13.531771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.027 [2024-11-21 04:59:13.531778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.027 [2024-11-21 04:59:13.531787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.027 [2024-11-21 04:59:13.531794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.027 [2024-11-21 04:59:13.531802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.027 [2024-11-21 04:59:13.531808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.027 [2024-11-21 04:59:13.531818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.027 [2024-11-21 04:59:13.531824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.594 [2024-11-21 04:59:14.030784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
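The wait loop traced here polls a small helper, bdev_bdfs, every 0.5s until no NVMe bdev still reports a PCI address. Reconstructed directly from the xtrace lines (nvme/sw_hotplug.sh@12-13), the helper is roughly the sketch below; rpc_cmd is the JSON-RPC wrapper sourced from autotest_common.sh, and treating it as already in scope is an assumption:

    # Sketch reconstructed from the trace: list the PCI addresses (BDFs)
    # still backing NVMe bdevs, deduplicated. Assumes rpc_cmd is in scope.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    # Caller pattern seen at sw_hotplug.sh@50: keep sleeping while any BDF remains.
    # bdfs=($(bdev_bdfs)); (( ${#bdfs[@]} > 0 )) && sleep 0.5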
00:12:57.594 [2024-11-21 04:59:14.031574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.594 [2024-11-21 04:59:14.031622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.594 [2024-11-21 04:59:14.031634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.594 [2024-11-21 04:59:14.031648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.594 [2024-11-21 04:59:14.031655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.594 [2024-11-21 04:59:14.031664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.594 [2024-11-21 04:59:14.031671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.594 [2024-11-21 04:59:14.031680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.594 [2024-11-21 04:59:14.031687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.594 [2024-11-21 04:59:14.031695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.594 [2024-11-21 04:59:14.031702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:57.594 [2024-11-21 04:59:14.031710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:57.594 04:59:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.594 04:59:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.594 04:59:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:57.594 04:59:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.795 04:59:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.795 04:59:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.795 04:59:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.795 04:59:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.795 04:59:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.795 04:59:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:09.795 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:09.795 [2024-11-21 04:59:26.430988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:09.795 [2024-11-21 04:59:26.431807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.795 [2024-11-21 04:59:26.431934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.795 [2024-11-21 04:59:26.431953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.795 [2024-11-21 04:59:26.431967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.795 [2024-11-21 04:59:26.431978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.795 [2024-11-21 04:59:26.431986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.795 [2024-11-21 04:59:26.431995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.795 [2024-11-21 04:59:26.432002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.795 [2024-11-21 04:59:26.432011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.795 [2024-11-21 04:59:26.432018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.795 [2024-11-21 04:59:26.432026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.796 [2024-11-21 04:59:26.432033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.361 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:10.361 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:10.361 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:10.361 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:10.361 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:10.361 04:59:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.361 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:10.361 04:59:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:10.361 [2024-11-21 04:59:26.930993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
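Each hotplug iteration in this trace follows the same shape: detach (sw_hotplug.sh@40's echo 1 per device, after which the driver logs the controller in failed state and aborts its outstanding ASYNC EVENT REQUESTs), poll until the bdevs disappear, then rescan and rebind (@56-62) before the 12s settle at @66. The sysfs nodes receiving those echoes are not shown in the trace; a plausible reconstruction of the cycle on a stock Linux PCI bus, with every path an assumption, is:

    bdf=0000:00:10.0                                 # example BDF from the trace
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"      # surprise-remove the function (@40, assumed node)
    # ... poll bdev_bdfs (sketched above) until no bdev reports $bdf ...
    echo 1 > /sys/bus/pci/rescan                                        # re-enumerate the bus (@56, assumed)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the userspace driver (@59, assumed)
    echo "$bdf" > /sys/bus/pci/drivers_probe                            # rebind the device (@60/61, assumed)
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # clear the override (@62, assumed)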
00:13:10.361 [2024-11-21 04:59:26.931931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.361 [2024-11-21 04:59:26.931964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.361 [2024-11-21 04:59:26.931977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.361 [2024-11-21 04:59:26.931992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.361 [2024-11-21 04:59:26.932000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.361 [2024-11-21 04:59:26.932010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.361 [2024-11-21 04:59:26.932017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.361 [2024-11-21 04:59:26.932031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.361 [2024-11-21 04:59:26.932039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.362 [2024-11-21 04:59:26.932048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:10.362 [2024-11-21 04:59:26.932056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:10.362 [2024-11-21 04:59:26.932065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:10.362 04:59:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.362 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:10.362 04:59:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:10.926 04:59:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.926 04:59:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:10.926 04:59:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.926 04:59:27 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.926 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:11.184 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:11.184 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:11.184 04:59:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.72 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.72 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:13:23.384 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:23.384 04:59:39 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79186 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79186 ']' 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79186 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79186 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79186' 00:13:23.384 killing process with pid 79186 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79186 00:13:23.384 04:59:39 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79186 00:13:23.645 04:59:40 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:23.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.167 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.167 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.428 0000:00:13.0 (1b36 0010): nvme -> 
uio_pci_generic 00:13:24.428 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.428 00:13:24.428 real 2m30.018s 00:13:24.428 user 1m50.815s 00:13:24.428 sys 0m17.990s 00:13:24.428 04:59:41 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.428 04:59:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:24.428 ************************************ 00:13:24.428 END TEST sw_hotplug 00:13:24.428 ************************************ 00:13:24.691 04:59:41 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:24.691 04:59:41 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:24.691 04:59:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.691 04:59:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.691 04:59:41 -- common/autotest_common.sh@10 -- # set +x 00:13:24.691 ************************************ 00:13:24.691 START TEST nvme_xnvme 00:13:24.691 ************************************ 00:13:24.691 04:59:41 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:24.691 * Looking for test storage... 00:13:24.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.691 04:59:41 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.691 04:59:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.692 04:59:41 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.692 --rc genhtml_branch_coverage=1 00:13:24.692 --rc genhtml_function_coverage=1 00:13:24.692 --rc genhtml_legend=1 00:13:24.692 --rc geninfo_all_blocks=1 00:13:24.692 --rc geninfo_unexecuted_blocks=1 00:13:24.692 00:13:24.692 ' 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.692 --rc genhtml_branch_coverage=1 00:13:24.692 --rc genhtml_function_coverage=1 00:13:24.692 --rc genhtml_legend=1 00:13:24.692 --rc geninfo_all_blocks=1 00:13:24.692 --rc geninfo_unexecuted_blocks=1 00:13:24.692 00:13:24.692 ' 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.692 --rc genhtml_branch_coverage=1 00:13:24.692 --rc genhtml_function_coverage=1 00:13:24.692 --rc genhtml_legend=1 00:13:24.692 --rc geninfo_all_blocks=1 00:13:24.692 --rc geninfo_unexecuted_blocks=1 00:13:24.692 00:13:24.692 ' 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.692 --rc genhtml_branch_coverage=1 00:13:24.692 --rc genhtml_function_coverage=1 00:13:24.692 --rc genhtml_legend=1 00:13:24.692 --rc geninfo_all_blocks=1 00:13:24.692 --rc geninfo_unexecuted_blocks=1 00:13:24.692 00:13:24.692 ' 00:13:24.692 04:59:41 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:24.692 04:59:41 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:24.692 04:59:41 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:24.692 04:59:41 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:24.692 04:59:41 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:24.692 04:59:41 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:24.693 04:59:41 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:24.693 04:59:41 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:24.693 04:59:41 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:24.693 #define SPDK_CONFIG_H 00:13:24.693 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:24.693 #define SPDK_CONFIG_APPS 1 00:13:24.693 #define SPDK_CONFIG_ARCH native 00:13:24.693 #define SPDK_CONFIG_ASAN 1 00:13:24.693 #undef SPDK_CONFIG_AVAHI 00:13:24.693 #undef SPDK_CONFIG_CET 00:13:24.693 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:24.693 #define SPDK_CONFIG_COVERAGE 1 00:13:24.693 #define SPDK_CONFIG_CROSS_PREFIX 00:13:24.693 #undef SPDK_CONFIG_CRYPTO 00:13:24.693 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:24.693 #undef SPDK_CONFIG_CUSTOMOCF 00:13:24.693 #undef SPDK_CONFIG_DAOS 00:13:24.693 #define SPDK_CONFIG_DAOS_DIR 00:13:24.693 #define SPDK_CONFIG_DEBUG 1 00:13:24.693 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:24.693 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:13:24.693 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:13:24.693 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:13:24.693 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:24.693 #undef SPDK_CONFIG_DPDK_UADK 00:13:24.693 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:24.693 #define SPDK_CONFIG_EXAMPLES 1 00:13:24.693 #undef SPDK_CONFIG_FC 00:13:24.693 #define SPDK_CONFIG_FC_PATH 00:13:24.693 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:24.693 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:24.693 #define SPDK_CONFIG_FSDEV 1 00:13:24.693 #undef SPDK_CONFIG_FUSE 00:13:24.693 #undef SPDK_CONFIG_FUZZER 00:13:24.693 #define SPDK_CONFIG_FUZZER_LIB 00:13:24.693 #undef SPDK_CONFIG_GOLANG 00:13:24.693 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:24.693 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:24.693 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:24.693 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:24.693 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:24.693 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:24.693 #undef SPDK_CONFIG_HAVE_LZ4 00:13:24.693 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:24.693 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:24.693 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:24.693 #define SPDK_CONFIG_IDXD 1 00:13:24.693 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:24.693 #undef SPDK_CONFIG_IPSEC_MB 00:13:24.693 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:24.693 #define SPDK_CONFIG_ISAL 1 00:13:24.693 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:24.693 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:24.693 #define SPDK_CONFIG_LIBDIR 00:13:24.693 #undef SPDK_CONFIG_LTO 00:13:24.693 #define SPDK_CONFIG_MAX_LCORES 128 00:13:24.693 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:24.693 #define SPDK_CONFIG_NVME_CUSE 1 00:13:24.693 #undef SPDK_CONFIG_OCF 00:13:24.693 #define SPDK_CONFIG_OCF_PATH 00:13:24.693 #define SPDK_CONFIG_OPENSSL_PATH 00:13:24.693 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:24.693 #define SPDK_CONFIG_PGO_DIR 00:13:24.693 #undef SPDK_CONFIG_PGO_USE 00:13:24.693 #define SPDK_CONFIG_PREFIX /usr/local 00:13:24.693 #undef SPDK_CONFIG_RAID5F 00:13:24.693 #undef SPDK_CONFIG_RBD 00:13:24.693 #define SPDK_CONFIG_RDMA 1 00:13:24.693 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:24.693 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:24.693 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:24.693 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:24.693 #define SPDK_CONFIG_SHARED 1 00:13:24.693 #undef SPDK_CONFIG_SMA 00:13:24.693 #define SPDK_CONFIG_TESTS 1 00:13:24.693 #undef SPDK_CONFIG_TSAN 00:13:24.693 #define SPDK_CONFIG_UBLK 1 00:13:24.693 #define SPDK_CONFIG_UBSAN 1 00:13:24.693 #undef SPDK_CONFIG_UNIT_TESTS 00:13:24.693 #undef SPDK_CONFIG_URING 00:13:24.693 #define SPDK_CONFIG_URING_PATH 00:13:24.693 #undef SPDK_CONFIG_URING_ZNS 00:13:24.693 #undef SPDK_CONFIG_USDT 00:13:24.693 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:24.693 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:24.693 #undef SPDK_CONFIG_VFIO_USER 00:13:24.693 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:24.693 #define SPDK_CONFIG_VHOST 1 00:13:24.693 #define SPDK_CONFIG_VIRTIO 1 00:13:24.693 #undef SPDK_CONFIG_VTUNE 00:13:24.693 #define SPDK_CONFIG_VTUNE_DIR 00:13:24.693 #define SPDK_CONFIG_WERROR 1 00:13:24.693 #define SPDK_CONFIG_WPDK_DIR 00:13:24.693 #define SPDK_CONFIG_XNVME 1 00:13:24.693 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:24.693 04:59:41 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.693 04:59:41 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.693 04:59:41 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.693 04:59:41 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.693 04:59:41 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.693 04:59:41 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.693 04:59:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.693 04:59:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.693 04:59:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:24.693 04:59:41 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:24.693 04:59:41 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:24.693 04:59:41 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:24.693 04:59:41 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:24.694 04:59:41 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@140 -- # : v23.11 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:24.694 04:59:41 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
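Just before this point the trace assembles a LeakSanitizer suppression file and wires it up through LSAN_OPTIONS. A sketch of that sequence, reconstructed from the rm, "echo leak:libfuse3.so", and LSAN_OPTIONS lines above (the single-entry file contents are an assumption based on the cat traced at autotest_common.sh@206):

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"                             # start from a clean file
    echo 'leak:libfuse3.so' > "$supp"          # known fuse3 leak to ignore
    export LSAN_OPTIONS="suppressions=$supp"   # consumed by ASan-built test binaries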
00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:24.695 04:59:41 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80572 ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80572 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.6BPDXn 00:13:24.957 04:59:41 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.6BPDXn/tests/xnvme /tmp/spdk.6BPDXn 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13245874176 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6338101248 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261960704 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13245874176 00:13:24.957 04:59:41 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6338101248 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97290973184 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2411806720 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:24.957 * Looking for test storage... 
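set_test_storage, traced above, snapshots every mount point from df -T into associative arrays (device, filesystem type, total, used, available) before hunting for a candidate directory with at least the requested ~2 GiB free. A runnable sketch of that parsing loop, mirroring the field order in the trace (df is assumed to be configured for byte-sized output, which the numbers above suggest):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source    # e.g. /dev/vda5
        fss["$mount"]=$fs           # e.g. btrfs
        sizes["$mount"]=$size       # total bytes
        uses["$mount"]=$use         # used bytes
        avails["$mount"]=$avail     # available bytes
    done < <(df -T | grep -v Filesystem)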
00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13245874176 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:24.957 04:59:41 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.958 --rc genhtml_branch_coverage=1 00:13:24.958 --rc genhtml_function_coverage=1 00:13:24.958 --rc genhtml_legend=1 00:13:24.958 --rc geninfo_all_blocks=1 00:13:24.958 --rc geninfo_unexecuted_blocks=1 00:13:24.958 00:13:24.958 ' 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.958 --rc genhtml_branch_coverage=1 00:13:24.958 --rc genhtml_function_coverage=1 00:13:24.958 --rc genhtml_legend=1 00:13:24.958 --rc geninfo_all_blocks=1 
00:13:24.958 --rc geninfo_unexecuted_blocks=1 00:13:24.958 00:13:24.958 ' 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.958 --rc genhtml_branch_coverage=1 00:13:24.958 --rc genhtml_function_coverage=1 00:13:24.958 --rc genhtml_legend=1 00:13:24.958 --rc geninfo_all_blocks=1 00:13:24.958 --rc geninfo_unexecuted_blocks=1 00:13:24.958 00:13:24.958 ' 00:13:24.958 04:59:41 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.958 --rc genhtml_branch_coverage=1 00:13:24.958 --rc genhtml_function_coverage=1 00:13:24.958 --rc genhtml_legend=1 00:13:24.958 --rc geninfo_all_blocks=1 00:13:24.958 --rc geninfo_unexecuted_blocks=1 00:13:24.958 00:13:24.958 ' 00:13:24.958 04:59:41 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.958 04:59:41 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.958 04:59:41 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.958 04:59:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.958 04:59:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.958 04:59:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:24.958 04:59:41 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.958 04:59:41 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:24.958 04:59:41 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:25.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.480 Waiting for block devices as requested 00:13:25.480 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.480 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.741 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.741 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:31.034 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:31.034 04:59:47 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:31.301 04:59:47 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:31.301 04:59:47 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:31.562 04:59:48 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:31.562 04:59:48 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:31.562 No valid GPT data, bailing 00:13:31.562 04:59:48 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:31.562 04:59:48 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:31.562 04:59:48 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:31.562 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:31.563 04:59:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:31.563 04:59:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.563 04:59:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.563 04:59:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.563 ************************************ 00:13:31.563 START TEST xnvme_rpc 00:13:31.563 ************************************ 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80961 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80961 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80961 ']' 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.563 04:59:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.563 [2024-11-21 04:59:48.250461] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
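The xnvme_rpc test starting here exercises the full JSON-RPC round trip: create an xnvme bdev against /dev/nvme0n1, read each creation parameter back out of the runtime config, then delete it. A condensed sketch using the rpc.py entry points and jq filter visible in the trace (repo path as in the log):

    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    # pull a single param back out of the live config, as rpc_xnvme does
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'   # -> libaio
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev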
00:13:31.563 [2024-11-21 04:59:48.250656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80961 ] 00:13:31.824 [2024-11-21 04:59:48.412558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.824 [2024-11-21 04:59:48.453583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.397 xnvme_bdev 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.397 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80961 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80961 ']' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80961 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80961 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.657 killing process with pid 80961 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80961' 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80961 00:13:32.657 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80961 00:13:33.230 00:13:33.230 real 0m1.612s 00:13:33.230 user 0m1.567s 00:13:33.230 sys 0m0.511s 00:13:33.230 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.230 04:59:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.230 ************************************ 00:13:33.230 END TEST xnvme_rpc 00:13:33.230 ************************************ 00:13:33.230 04:59:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:33.230 04:59:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.230 04:59:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.230 04:59:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.230 ************************************ 00:13:33.230 START TEST xnvme_bdevperf 00:13:33.230 ************************************ 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:33.230 04:59:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:33.230 { 00:13:33.230 "subsystems": [ 00:13:33.230 { 00:13:33.230 "subsystem": "bdev", 00:13:33.230 "config": [ 00:13:33.230 { 00:13:33.230 "params": { 00:13:33.230 "io_mechanism": "libaio", 00:13:33.230 "conserve_cpu": false, 00:13:33.230 "filename": "/dev/nvme0n1", 00:13:33.230 "name": "xnvme_bdev" 00:13:33.230 }, 00:13:33.230 "method": "bdev_xnvme_create" 00:13:33.230 }, 00:13:33.230 { 00:13:33.230 "method": "bdev_wait_for_examine" 00:13:33.230 } 00:13:33.230 ] 00:13:33.230 } 00:13:33.230 ] 00:13:33.230 } 00:13:33.230 [2024-11-21 04:59:49.911570] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:13:33.230 [2024-11-21 04:59:49.911744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81020 ] 00:13:33.492 [2024-11-21 04:59:50.079176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.492 [2024-11-21 04:59:50.120015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.754 Running I/O for 5 seconds... 00:13:35.645 25102.00 IOPS, 98.05 MiB/s [2024-11-21T04:59:53.321Z] 25536.00 IOPS, 99.75 MiB/s [2024-11-21T04:59:54.724Z] 25463.00 IOPS, 99.46 MiB/s [2024-11-21T04:59:55.295Z] 26524.00 IOPS, 103.61 MiB/s [2024-11-21T04:59:55.295Z] 26694.80 IOPS, 104.28 MiB/s 00:13:38.561 Latency(us) 00:13:38.561 [2024-11-21T04:59:55.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.561 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:38.561 xnvme_bdev : 5.01 26651.49 104.11 0.00 0.00 2396.18 497.82 9628.75 00:13:38.561 [2024-11-21T04:59:55.295Z] =================================================================================================================== 00:13:38.561 [2024-11-21T04:59:55.295Z] Total : 26651.49 104.11 0.00 0.00 2396.18 497.82 9628.75 00:13:38.822 04:59:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.822 04:59:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:38.822 04:59:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:38.822 04:59:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:38.822 04:59:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.822 { 00:13:38.822 "subsystems": [ 00:13:38.822 { 00:13:38.822 "subsystem": "bdev", 00:13:38.822 "config": [ 00:13:38.822 { 00:13:38.822 "params": { 00:13:38.822 "io_mechanism": "libaio", 00:13:38.822 "conserve_cpu": false, 00:13:38.822 "filename": "/dev/nvme0n1", 00:13:38.822 "name": "xnvme_bdev" 00:13:38.822 }, 00:13:38.822 "method": "bdev_xnvme_create" 00:13:38.822 }, 00:13:38.822 { 00:13:38.822 "method": "bdev_wait_for_examine" 00:13:38.822 } 00:13:38.822 ] 00:13:38.822 } 00:13:38.822 ] 00:13:38.822 } 00:13:38.822 [2024-11-21 04:59:55.532424] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
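The MiB/s column in these bdevperf tables follows directly from the 4096-byte I/O size set by -o 4096: throughput is IOPS times 4 KiB. Checking the first randread total by hand:

    awk 'BEGIN { printf "%.2f MiB/s\n", 26651.49 * 4096 / (1024 * 1024) }'   # prints 104.11, matching the table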
00:13:38.822 [2024-11-21 04:59:55.532546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81084 ] 00:13:39.083 [2024-11-21 04:59:55.691991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.083 [2024-11-21 04:59:55.716382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.343 Running I/O for 5 seconds... 00:13:41.228 36933.00 IOPS, 144.27 MiB/s [2024-11-21T04:59:58.907Z] 37095.50 IOPS, 144.90 MiB/s [2024-11-21T04:59:59.850Z] 36919.67 IOPS, 144.22 MiB/s [2024-11-21T05:00:01.234Z] 36281.25 IOPS, 141.72 MiB/s 00:13:44.500 Latency(us) 00:13:44.500 [2024-11-21T05:00:01.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.500 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:44.500 xnvme_bdev : 5.00 35828.76 139.96 0.00 0.00 1781.87 237.88 7864.32 00:13:44.500 [2024-11-21T05:00:01.234Z] =================================================================================================================== 00:13:44.500 [2024-11-21T05:00:01.234Z] Total : 35828.76 139.96 0.00 0.00 1781.87 237.88 7864.32 00:13:44.500 00:13:44.501 real 0m11.267s 00:13:44.501 user 0m3.189s 00:13:44.501 sys 0m6.433s 00:13:44.501 05:00:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.501 05:00:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.501 ************************************ 00:13:44.501 END TEST xnvme_bdevperf 00:13:44.501 ************************************ 00:13:44.501 05:00:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:44.501 05:00:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:44.501 05:00:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.501 05:00:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.501 ************************************ 00:13:44.501 START TEST xnvme_fio_plugin 00:13:44.501 ************************************ 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.501 05:00:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.501 { 00:13:44.501 "subsystems": [ 00:13:44.501 { 00:13:44.501 "subsystem": "bdev", 00:13:44.501 "config": [ 00:13:44.501 { 00:13:44.501 "params": { 00:13:44.501 "io_mechanism": "libaio", 00:13:44.501 "conserve_cpu": false, 00:13:44.501 "filename": "/dev/nvme0n1", 00:13:44.501 "name": "xnvme_bdev" 00:13:44.501 }, 00:13:44.501 "method": "bdev_xnvme_create" 00:13:44.501 }, 00:13:44.501 { 00:13:44.501 "method": "bdev_wait_for_examine" 00:13:44.501 } 00:13:44.501 ] 00:13:44.501 } 00:13:44.501 ] 00:13:44.501 } 00:13:44.762 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:44.762 fio-3.35 00:13:44.762 Starting 1 thread 00:13:51.353 00:13:51.353 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81200: Thu Nov 21 05:00:06 2024 00:13:51.353 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(646MiB/5002msec) 00:13:51.353 slat (usec): min=4, max=2216, avg=19.08, stdev=94.60 00:13:51.353 clat (usec): min=91, max=14385, avg=1429.67, stdev=584.03 00:13:51.353 lat (usec): min=199, max=14389, avg=1448.75, stdev=575.57 00:13:51.353 clat percentiles (usec): 00:13:51.353 | 1.00th=[ 306], 5.00th=[ 611], 10.00th=[ 775], 20.00th=[ 988], 00:13:51.353 | 30.00th=[ 1139], 40.00th=[ 1287], 50.00th=[ 1401], 60.00th=[ 1516], 00:13:51.353 | 70.00th=[ 1647], 80.00th=[ 1811], 90.00th=[ 2040], 95.00th=[ 2311], 00:13:51.353 | 99.00th=[ 3130], 99.50th=[ 3523], 99.90th=[ 6718], 99.95th=[ 7439], 00:13:51.353 | 99.99th=[ 8848] 00:13:51.353 bw ( KiB/s): min=118848, max=135896, per=98.46%, avg=130129.22, 
stdev=6046.52, samples=9 00:13:51.353 iops : min=29712, max=33974, avg=32532.22, stdev=1511.69, samples=9 00:13:51.353 lat (usec) : 100=0.01%, 250=0.53%, 500=2.40%, 750=6.25%, 1000=11.75% 00:13:51.353 lat (msec) : 2=67.58%, 4=11.19%, 10=0.30%, 20=0.01% 00:13:51.353 cpu : usr=48.63%, sys=43.37%, ctx=13, majf=0, minf=773 00:13:51.353 IO depths : 1=0.5%, 2=1.1%, 4=2.8%, 8=7.9%, 16=22.7%, 32=62.8%, >=64=2.2% 00:13:51.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.353 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:51.353 issued rwts: total=165274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.353 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:51.353 00:13:51.353 Run status group 0 (all jobs): 00:13:51.353 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=646MiB (677MB), run=5002-5002msec 00:13:51.353 ----------------------------------------------------- 00:13:51.353 Suppressions used: 00:13:51.353 count bytes template 00:13:51.353 1 11 /usr/src/fio/parse.c 00:13:51.353 1 8 libtcmalloc_minimal.so 00:13:51.353 1 904 libcrypto.so 00:13:51.353 ----------------------------------------------------- 00:13:51.353 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.353 05:00:07 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.353 05:00:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.353 { 00:13:51.353 "subsystems": [ 00:13:51.353 { 00:13:51.353 "subsystem": "bdev", 00:13:51.353 "config": [ 00:13:51.353 { 00:13:51.353 "params": { 00:13:51.354 "io_mechanism": "libaio", 00:13:51.354 "conserve_cpu": false, 00:13:51.354 "filename": "/dev/nvme0n1", 00:13:51.354 "name": "xnvme_bdev" 00:13:51.354 }, 00:13:51.354 "method": "bdev_xnvme_create" 00:13:51.354 }, 00:13:51.354 { 00:13:51.354 "method": "bdev_wait_for_examine" 00:13:51.354 } 00:13:51.354 ] 00:13:51.354 } 00:13:51.354 ] 00:13:51.354 } 00:13:51.354 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.354 fio-3.35 00:13:51.354 Starting 1 thread 00:13:56.647 00:13:56.647 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81286: Thu Nov 21 05:00:13 2024 00:13:56.647 write: IOPS=15.5k, BW=60.4MiB/s (63.4MB/s)(302MiB/5006msec); 0 zone resets 00:13:56.647 slat (usec): min=4, max=1319, avg=11.38, stdev=40.01 00:13:56.647 clat (usec): min=8, max=23499, avg=4038.55, stdev=4992.91 00:13:56.647 lat (usec): min=59, max=23504, avg=4049.93, stdev=4992.04 00:13:56.647 clat percentiles (usec): 00:13:56.647 | 1.00th=[ 113], 5.00th=[ 255], 10.00th=[ 379], 20.00th=[ 570], 00:13:56.647 | 30.00th=[ 685], 40.00th=[ 750], 50.00th=[ 824], 60.00th=[ 1012], 00:13:56.647 | 70.00th=[ 7832], 80.00th=[10421], 90.00th=[11994], 95.00th=[13042], 00:13:56.647 | 99.00th=[15139], 99.50th=[16057], 99.90th=[18220], 99.95th=[19792], 00:13:56.647 | 99.99th=[22676] 00:13:56.647 bw ( KiB/s): min=57592, max=66424, per=100.00%, avg=61955.56, stdev=3157.71, samples=9 00:13:56.647 iops : min=14398, max=16606, avg=15488.89, stdev=789.43, samples=9 00:13:56.647 lat (usec) : 10=0.01%, 20=0.03%, 50=0.10%, 100=0.51%, 250=4.23% 00:13:56.647 lat (usec) : 500=11.89%, 750=23.73%, 1000=19.29% 00:13:56.647 lat (msec) : 2=7.60%, 4=0.38%, 10=9.77%, 20=22.44%, 50=0.04% 00:13:56.647 cpu : usr=80.40%, sys=9.71%, ctx=16, majf=0, minf=773 00:13:56.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.6%, 32=86.4%, >=64=12.9% 00:13:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.647 complete : 0=0.0%, 4=93.8%, 8=2.3%, 16=2.5%, 32=1.3%, 64=0.1%, >=64=0.0% 00:13:56.647 issued rwts: total=0,77439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.647 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:56.647 00:13:56.647 Run status group 0 (all jobs): 00:13:56.647 WRITE: bw=60.4MiB/s (63.4MB/s), 60.4MiB/s-60.4MiB/s (63.4MB/s-63.4MB/s), io=302MiB (317MB), run=5006-5006msec 00:13:56.909 ----------------------------------------------------- 00:13:56.909 Suppressions used: 00:13:56.909 count bytes template 00:13:56.909 1 11 /usr/src/fio/parse.c 00:13:56.909 1 8 libtcmalloc_minimal.so 00:13:56.909 1 904 libcrypto.so 00:13:56.909 ----------------------------------------------------- 00:13:56.909 00:13:56.909 00:13:56.909 real 
0m12.389s 00:13:56.909 user 0m7.762s 00:13:56.909 sys 0m3.342s 00:13:56.909 ************************************ 00:13:56.909 END TEST xnvme_fio_plugin 00:13:56.909 ************************************ 00:13:56.909 05:00:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.909 05:00:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:56.909 05:00:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:56.909 05:00:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:56.909 05:00:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:56.909 05:00:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:56.909 05:00:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.909 05:00:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.909 05:00:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.909 ************************************ 00:13:56.909 START TEST xnvme_rpc 00:13:56.909 ************************************ 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:56.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81361 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81361 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81361 ']' 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.909 05:00:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.171 [2024-11-21 05:00:13.724399] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
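Both fio_plugin runs above first resolve which ASan runtime the spdk_bdev ioengine links against and preload it, presumably so the sanitizer is initialized before fio loads the plugin. A sketch of that ldd/grep/awk step and the resulting invocation, assembled from the traced commands (paths and flags exactly as logged):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this host
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev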
00:13:57.171 [2024-11-21 05:00:13.724851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81361 ] 00:13:57.171 [2024-11-21 05:00:13.885295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.432 [2024-11-21 05:00:13.944360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.005 xnvme_bdev 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:58.005 05:00:14 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.005 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 05:00:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81361 00:13:58.267 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81361 ']' 00:13:58.267 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81361 00:13:58.267 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81361 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.268 killing process with pid 81361 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81361' 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81361 00:13:58.268 05:00:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81361 00:13:58.841 ************************************ 00:13:58.841 END TEST xnvme_rpc 00:13:58.841 ************************************ 00:13:58.841 00:13:58.841 real 0m1.651s 00:13:58.841 user 0m1.595s 00:13:58.841 sys 0m0.535s 00:13:58.841 05:00:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.841 05:00:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.841 05:00:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:58.841 05:00:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.841 05:00:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.841 05:00:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.841 ************************************ 00:13:58.841 START TEST xnvme_bdevperf 00:13:58.841 ************************************ 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:58.841 05:00:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:58.841 { 00:13:58.841 "subsystems": [ 00:13:58.841 { 00:13:58.841 "subsystem": "bdev", 00:13:58.841 "config": [ 00:13:58.841 { 00:13:58.841 "params": { 00:13:58.841 "io_mechanism": "libaio", 00:13:58.841 "conserve_cpu": true, 00:13:58.841 "filename": "/dev/nvme0n1", 00:13:58.841 "name": "xnvme_bdev" 00:13:58.841 }, 00:13:58.841 "method": "bdev_xnvme_create" 00:13:58.841 }, 00:13:58.841 { 00:13:58.841 "method": "bdev_wait_for_examine" 00:13:58.841 } 00:13:58.841 ] 00:13:58.841 } 00:13:58.841 ] 00:13:58.841 } 00:13:58.841 [2024-11-21 05:00:15.437831] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:13:58.842 [2024-11-21 05:00:15.437992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81419 ] 00:13:59.103 [2024-11-21 05:00:15.601107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.104 [2024-11-21 05:00:15.641591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.104 Running I/O for 5 seconds... 00:14:01.435 32076.00 IOPS, 125.30 MiB/s [2024-11-21T05:00:19.113Z] 32449.00 IOPS, 126.75 MiB/s [2024-11-21T05:00:20.055Z] 34203.67 IOPS, 133.61 MiB/s [2024-11-21T05:00:20.998Z] 35455.75 IOPS, 138.50 MiB/s [2024-11-21T05:00:20.998Z] 36311.40 IOPS, 141.84 MiB/s 00:14:04.264 Latency(us) 00:14:04.264 [2024-11-21T05:00:20.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.264 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:04.264 xnvme_bdev : 5.00 36294.01 141.77 0.00 0.00 1759.01 69.32 17845.96 00:14:04.264 [2024-11-21T05:00:20.998Z] =================================================================================================================== 00:14:04.264 [2024-11-21T05:00:20.998Z] Total : 36294.01 141.77 0.00 0.00 1759.01 69.32 17845.96 00:14:04.264 05:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.264 05:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:04.264 05:00:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:04.264 05:00:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.264 05:00:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.526 { 00:14:04.526 "subsystems": [ 00:14:04.526 { 00:14:04.526 "subsystem": "bdev", 00:14:04.526 "config": [ 00:14:04.526 { 00:14:04.526 "params": { 00:14:04.526 "io_mechanism": "libaio", 00:14:04.526 "conserve_cpu": true, 00:14:04.526 "filename": "/dev/nvme0n1", 00:14:04.526 "name": "xnvme_bdev" 00:14:04.526 }, 00:14:04.526 "method": "bdev_xnvme_create" 00:14:04.526 }, 00:14:04.526 { 00:14:04.526 "method": "bdev_wait_for_examine" 00:14:04.526 } 00:14:04.526 ] 00:14:04.526 } 00:14:04.526 ] 00:14:04.526 } 00:14:04.526 [2024-11-21 05:00:21.055161] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
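The bdevperf runs in this test receive their bdev configuration over /dev/fd/62; the JSON is exactly the "subsystems" block printed in the log. A standalone sketch of the same invocation, with the bdevperf flags copied verbatim from the trace and /tmp/xnvme.json as an illustrative stand-in for the fd-passed config:

    # JSON matches the logged config: libaio, conserve_cpu=true
    cat > /tmp/xnvme.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "params": { "io_mechanism": "libaio", "conserve_cpu": true,
                          "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ] }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096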
00:14:04.526 [2024-11-21 05:00:21.055287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81489 ] 00:14:04.526 [2024-11-21 05:00:21.213870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.526 [2024-11-21 05:00:21.238744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.787 Running I/O for 5 seconds... 00:14:06.671 5438.00 IOPS, 21.24 MiB/s [2024-11-21T05:00:24.347Z] 5546.00 IOPS, 21.66 MiB/s [2024-11-21T05:00:25.730Z] 5673.00 IOPS, 22.16 MiB/s [2024-11-21T05:00:26.704Z] 5797.75 IOPS, 22.65 MiB/s [2024-11-21T05:00:26.704Z] 6126.40 IOPS, 23.93 MiB/s 00:14:09.970 Latency(us) 00:14:09.970 [2024-11-21T05:00:26.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.970 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:09.970 xnvme_bdev : 5.01 6128.96 23.94 0.00 0.00 10430.40 45.10 35691.91 00:14:09.970 [2024-11-21T05:00:26.704Z] =================================================================================================================== 00:14:09.970 [2024-11-21T05:00:26.704Z] Total : 6128.96 23.94 0.00 0.00 10430.40 45.10 35691.91 00:14:09.970 00:14:09.970 real 0m11.197s 00:14:09.970 user 0m6.440s 00:14:09.970 sys 0m3.552s 00:14:09.970 05:00:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.970 05:00:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 ************************************ 00:14:09.970 END TEST xnvme_bdevperf 00:14:09.970 ************************************ 00:14:09.970 05:00:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:09.970 05:00:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.970 05:00:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.970 05:00:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 ************************************ 00:14:09.970 START TEST xnvme_fio_plugin 00:14:09.970 ************************************ 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:09.970 05:00:26 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:09.970 05:00:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.970 { 00:14:09.970 "subsystems": [ 00:14:09.970 { 00:14:09.970 "subsystem": "bdev", 00:14:09.970 "config": [ 00:14:09.970 { 00:14:09.970 "params": { 00:14:09.970 "io_mechanism": "libaio", 00:14:09.970 "conserve_cpu": true, 00:14:09.970 "filename": "/dev/nvme0n1", 00:14:09.970 "name": "xnvme_bdev" 00:14:09.970 }, 00:14:09.970 "method": "bdev_xnvme_create" 00:14:09.970 }, 00:14:09.970 { 00:14:09.970 "method": "bdev_wait_for_examine" 00:14:09.970 } 00:14:09.970 ] 00:14:09.970 } 00:14:09.970 ] 00:14:09.970 } 00:14:10.232 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:10.232 fio-3.35 00:14:10.232 Starting 1 thread 00:14:15.525 00:14:15.525 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81593: Thu Nov 21 05:00:32 2024 00:14:15.525 read: IOPS=40.9k, BW=160MiB/s (167MB/s)(798MiB/5001msec) 00:14:15.525 slat (usec): min=4, max=1415, avg=19.80, stdev=70.52 00:14:15.525 clat (usec): min=9, max=23330, avg=1025.43, stdev=539.25 00:14:15.525 lat (usec): min=76, max=23334, avg=1045.23, stdev=536.06 00:14:15.525 clat percentiles (usec): 00:14:15.525 | 1.00th=[ 198], 5.00th=[ 318], 10.00th=[ 457], 20.00th=[ 611], 00:14:15.525 | 30.00th=[ 742], 40.00th=[ 857], 50.00th=[ 963], 60.00th=[ 1074], 00:14:15.525 | 70.00th=[ 1205], 80.00th=[ 1385], 90.00th=[ 1631], 95.00th=[ 1893], 00:14:15.525 | 99.00th=[ 2573], 99.50th=[ 2966], 99.90th=[ 4146], 99.95th=[ 5473], 00:14:15.525 | 99.99th=[12518] 00:14:15.525 bw ( KiB/s): min=152168, max=173896, 
per=100.00%, avg=163415.11, stdev=7243.92, samples=9 00:14:15.525 iops : min=38042, max=43474, avg=40853.78, stdev=1810.98, samples=9 00:14:15.525 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=2.68% 00:14:15.525 lat (usec) : 500=9.65%, 750=18.19%, 1000=23.06% 00:14:15.525 lat (msec) : 2=42.70%, 4=3.60%, 10=0.09%, 20=0.02%, 50=0.01% 00:14:15.525 cpu : usr=33.48%, sys=57.22%, ctx=33, majf=0, minf=773 00:14:15.525 IO depths : 1=0.3%, 2=0.9%, 4=3.2%, 8=9.5%, 16=24.4%, 32=59.7%, >=64=2.0% 00:14:15.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.525 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:14:15.525 issued rwts: total=204312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.525 00:14:15.525 Run status group 0 (all jobs): 00:14:15.525 READ: bw=160MiB/s (167MB/s), 160MiB/s-160MiB/s (167MB/s-167MB/s), io=798MiB (837MB), run=5001-5001msec 00:14:16.099 ----------------------------------------------------- 00:14:16.099 Suppressions used: 00:14:16.099 count bytes template 00:14:16.099 1 11 /usr/src/fio/parse.c 00:14:16.099 1 8 libtcmalloc_minimal.so 00:14:16.099 1 904 libcrypto.so 00:14:16.099 ----------------------------------------------------- 00:14:16.099 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:16.099 05:00:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.099 { 00:14:16.099 "subsystems": [ 00:14:16.099 { 00:14:16.099 "subsystem": "bdev", 00:14:16.099 "config": [ 00:14:16.099 { 00:14:16.099 "params": { 00:14:16.099 "io_mechanism": "libaio", 00:14:16.099 "conserve_cpu": true, 00:14:16.099 "filename": "/dev/nvme0n1", 00:14:16.099 "name": "xnvme_bdev" 00:14:16.099 }, 00:14:16.099 "method": "bdev_xnvme_create" 00:14:16.099 }, 00:14:16.099 { 00:14:16.099 "method": "bdev_wait_for_examine" 00:14:16.099 } 00:14:16.099 ] 00:14:16.099 } 00:14:16.099 ] 00:14:16.099 } 00:14:16.099 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:16.099 fio-3.35 00:14:16.099 Starting 1 thread 00:14:22.693 00:14:22.694 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81679: Thu Nov 21 05:00:38 2024 00:14:22.694 write: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(423MiB/5008msec); 0 zone resets 00:14:22.694 slat (usec): min=4, max=1397, avg=20.24, stdev=71.86 00:14:22.694 clat (usec): min=9, max=413740, avg=2441.60, stdev=17115.54 00:14:22.694 lat (usec): min=53, max=413745, avg=2461.84, stdev=17114.67 00:14:22.694 clat percentiles (usec): 00:14:22.694 | 1.00th=[ 200], 5.00th=[ 347], 10.00th=[ 490], 20.00th=[ 652], 00:14:22.694 | 30.00th=[ 775], 40.00th=[ 906], 50.00th=[ 1037], 60.00th=[ 1172], 00:14:22.694 | 70.00th=[ 1336], 80.00th=[ 1532], 90.00th=[ 1975], 95.00th=[ 2835], 00:14:22.694 | 99.00th=[ 13960], 99.50th=[ 16450], 99.90th=[341836], 99.95th=[408945], 00:14:22.694 | 99.99th=[413139] 00:14:22.694 bw ( KiB/s): min=26520, max=153624, per=100.00%, avg=86506.40, stdev=54546.77, samples=10 00:14:22.694 iops : min= 6630, max=38406, avg=21626.60, stdev=13636.69, samples=10 00:14:22.694 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.09%, 250=1.88% 00:14:22.694 lat (usec) : 500=8.52%, 750=17.59%, 1000=19.23% 00:14:22.694 lat (msec) : 2=43.07%, 4=5.37%, 10=1.53%, 20=2.27%, 50=0.01% 00:14:22.694 lat (msec) : 100=0.02%, 250=0.22%, 500=0.18% 00:14:22.694 cpu : usr=62.57%, sys=30.52%, ctx=27, majf=0, minf=773 00:14:22.694 IO depths : 1=0.2%, 2=0.7%, 4=2.5%, 8=7.8%, 16=21.7%, 32=63.6%, >=64=3.5% 00:14:22.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.694 complete : 0=0.0%, 4=97.5%, 8=0.3%, 16=0.3%, 32=0.3%, 64=1.5%, >=64=0.0% 00:14:22.694 issued rwts: total=0,108187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.694 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:22.694 00:14:22.694 Run status group 0 (all jobs): 00:14:22.694 WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=423MiB (443MB), run=5008-5008msec 00:14:22.694 ----------------------------------------------------- 00:14:22.694 Suppressions used: 00:14:22.694 count bytes template 00:14:22.694 1 11 /usr/src/fio/parse.c 00:14:22.694 1 8 
libtcmalloc_minimal.so 00:14:22.694 1 904 libcrypto.so 00:14:22.694 ----------------------------------------------------- 00:14:22.694 00:14:22.694 ************************************ 00:14:22.694 END TEST xnvme_fio_plugin 00:14:22.694 ************************************ 00:14:22.694 00:14:22.694 real 0m12.069s 00:14:22.694 user 0m5.964s 00:14:22.694 sys 0m4.901s 00:14:22.694 05:00:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.694 05:00:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:22.694 05:00:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:22.694 05:00:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:22.694 05:00:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.694 05:00:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.694 ************************************ 00:14:22.694 START TEST xnvme_rpc 00:14:22.694 ************************************ 00:14:22.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81760 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81760 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81760 ']' 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.694 05:00:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.694 [2024-11-21 05:00:38.826919] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
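At this point the suite switches io_mechanism from libaio to io_uring and resets conserve_cpu to false. The xtrace line numbers (xnvme.sh@75 through @88) imply a nested driver loop; reconstructed from the trace, not the literal script, its shape is roughly:

    # reconstruction from the xtrace above; variable names as they appear there
    for io in "${xnvme_io[@]}"; do                      # e.g. libaio, io_uring
      method_bdev_xnvme_create_0["io_mechanism"]=$io
      method_bdev_xnvme_create_0["filename"]=$filename  # /dev/nvme0n1 in this job
      for cc in "${xnvme_conserve_cpu[@]}"; do          # false, true
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
      done
    done

This matches the observed ordering: xnvme_rpc, xnvme_bdevperf, xnvme_fio_plugin for every (io_mechanism, conserve_cpu) pair.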
00:14:22.694 [2024-11-21 05:00:38.827236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81760 ] 00:14:22.694 [2024-11-21 05:00:38.986275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.694 [2024-11-21 05:00:39.010671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.955 xnvme_bdev 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.955 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81760 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81760 ']' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81760 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81760 00:14:23.216 killing process with pid 81760 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81760' 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81760 00:14:23.216 05:00:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81760 00:14:23.477 00:14:23.477 real 0m1.396s 00:14:23.477 user 0m1.489s 00:14:23.477 sys 0m0.364s 00:14:23.477 05:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.477 ************************************ 00:14:23.477 END TEST xnvme_rpc 00:14:23.477 ************************************ 00:14:23.477 05:00:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.477 05:00:40 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:23.477 05:00:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:23.477 05:00:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.477 05:00:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.737 ************************************ 00:14:23.737 START TEST xnvme_bdevperf 00:14:23.737 ************************************ 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:23.737 05:00:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:23.737 { 00:14:23.737 "subsystems": [ 00:14:23.737 { 00:14:23.737 "subsystem": "bdev", 00:14:23.737 "config": [ 00:14:23.737 { 00:14:23.737 "params": { 00:14:23.737 "io_mechanism": "io_uring", 00:14:23.737 "conserve_cpu": false, 00:14:23.737 "filename": "/dev/nvme0n1", 00:14:23.737 "name": "xnvme_bdev" 00:14:23.737 }, 00:14:23.737 "method": "bdev_xnvme_create" 00:14:23.737 }, 00:14:23.737 { 00:14:23.737 "method": "bdev_wait_for_examine" 00:14:23.737 } 00:14:23.737 ] 00:14:23.737 } 00:14:23.737 ] 00:14:23.737 } 00:14:23.737 [2024-11-21 05:00:40.276232] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:14:23.737 [2024-11-21 05:00:40.276360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81812 ] 00:14:23.737 [2024-11-21 05:00:40.435980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.737 [2024-11-21 05:00:40.460079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.998 Running I/O for 5 seconds... 00:14:25.886 37854.00 IOPS, 147.87 MiB/s [2024-11-21T05:00:43.565Z] 37619.00 IOPS, 146.95 MiB/s [2024-11-21T05:00:44.950Z] 37349.00 IOPS, 145.89 MiB/s [2024-11-21T05:00:45.893Z] 37276.25 IOPS, 145.61 MiB/s [2024-11-21T05:00:45.893Z] 37122.00 IOPS, 145.01 MiB/s 00:14:29.159 Latency(us) 00:14:29.159 [2024-11-21T05:00:45.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.159 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:29.159 xnvme_bdev : 5.01 37087.65 144.87 0.00 0.00 1721.73 184.32 20366.57 00:14:29.159 [2024-11-21T05:00:45.893Z] =================================================================================================================== 00:14:29.159 [2024-11-21T05:00:45.893Z] Total : 37087.65 144.87 0.00 0.00 1721.73 184.32 20366.57 00:14:29.159 05:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.159 05:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:29.159 05:00:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:29.159 05:00:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.159 05:00:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.159 { 00:14:29.159 "subsystems": [ 00:14:29.159 { 00:14:29.159 "subsystem": "bdev", 00:14:29.159 "config": [ 00:14:29.159 { 00:14:29.159 "params": { 00:14:29.159 "io_mechanism": "io_uring", 00:14:29.159 "conserve_cpu": false, 00:14:29.159 "filename": "/dev/nvme0n1", 00:14:29.159 "name": "xnvme_bdev" 00:14:29.159 }, 00:14:29.159 "method": "bdev_xnvme_create" 00:14:29.159 }, 00:14:29.159 { 00:14:29.159 "method": "bdev_wait_for_examine" 00:14:29.159 } 00:14:29.159 ] 00:14:29.159 } 00:14:29.159 ] 00:14:29.159 } 00:14:29.420 [2024-11-21 05:00:45.898110] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
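The MiB/s column in these bdevperf tables follows directly from the IOPS figure and the 4096-byte IO size (-o 4096); for the randread result above:

    37087.65 IOPS × 4096 B  =  151,911,014 B/s
    151,911,014 B/s ÷ 2^20  ≈  144.87 MiB/s

which is the bandwidth bdevperf prints (fio reports the same quantity as binary MiB/s with decimal MB/s in parentheses).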
00:14:29.420 [2024-11-21 05:00:45.898275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81885 ] 00:14:29.420 [2024-11-21 05:00:46.064181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.420 [2024-11-21 05:00:46.104162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.680 Running I/O for 5 seconds... 00:14:31.566 6256.00 IOPS, 24.44 MiB/s [2024-11-21T05:00:49.685Z] 6410.00 IOPS, 25.04 MiB/s [2024-11-21T05:00:50.258Z] 6710.00 IOPS, 26.21 MiB/s [2024-11-21T05:00:51.644Z] 7025.00 IOPS, 27.44 MiB/s [2024-11-21T05:00:51.644Z] 7115.40 IOPS, 27.79 MiB/s 00:14:34.910 Latency(us) 00:14:34.910 [2024-11-21T05:00:51.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.910 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:34.910 xnvme_bdev : 5.01 7112.71 27.78 0.00 0.00 8985.18 64.59 47387.57 00:14:34.910 [2024-11-21T05:00:51.644Z] =================================================================================================================== 00:14:34.910 [2024-11-21T05:00:51.644Z] Total : 7112.71 27.78 0.00 0.00 8985.18 64.59 47387.57 00:14:34.910 00:14:34.910 real 0m11.308s 00:14:34.910 user 0m4.403s 00:14:34.910 sys 0m6.663s 00:14:34.910 05:00:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.910 ************************************ 00:14:34.910 END TEST xnvme_bdevperf 00:14:34.910 ************************************ 00:14:34.910 05:00:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 05:00:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:34.910 05:00:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:34.910 05:00:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.910 05:00:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 ************************************ 00:14:34.910 START TEST xnvme_fio_plugin 00:14:34.910 ************************************ 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:34.910 05:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:34.910 { 00:14:34.910 "subsystems": [ 00:14:34.910 { 00:14:34.910 "subsystem": "bdev", 00:14:34.910 "config": [ 00:14:34.910 { 00:14:34.910 "params": { 00:14:34.910 "io_mechanism": "io_uring", 00:14:34.910 "conserve_cpu": false, 00:14:34.910 "filename": "/dev/nvme0n1", 00:14:34.910 "name": "xnvme_bdev" 00:14:34.910 }, 00:14:34.910 "method": "bdev_xnvme_create" 00:14:34.910 }, 00:14:34.910 { 00:14:34.910 "method": "bdev_wait_for_examine" 00:14:34.910 } 00:14:34.910 ] 00:14:34.911 } 00:14:34.911 ] 00:14:34.911 } 00:14:35.172 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:35.172 fio-3.35 00:14:35.172 Starting 1 thread 00:14:41.762 00:14:41.762 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81994: Thu Nov 21 05:00:57 2024 00:14:41.762 read: IOPS=34.7k, BW=135MiB/s (142MB/s)(677MiB/5001msec) 00:14:41.762 slat (nsec): min=2722, max=89669, avg=3759.34, stdev=2308.88 00:14:41.762 clat (usec): min=880, max=3750, avg=1689.85, stdev=291.59 00:14:41.762 lat (usec): min=883, max=3783, avg=1693.61, stdev=292.08 00:14:41.762 clat percentiles (usec): 00:14:41.762 | 1.00th=[ 1123], 5.00th=[ 1270], 10.00th=[ 1352], 20.00th=[ 1450], 00:14:41.762 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:14:41.762 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:14:41.762 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 2966], 99.95th=[ 3228], 00:14:41.762 | 99.99th=[ 3589] 00:14:41.762 bw ( KiB/s): min=133120, 
max=154112, per=100.00%, avg=138752.00, stdev=6566.79, samples=9 00:14:41.762 iops : min=33280, max=38528, avg=34688.00, stdev=1641.70, samples=9 00:14:41.762 lat (usec) : 1000=0.12% 00:14:41.762 lat (msec) : 2=86.33%, 4=13.54% 00:14:41.762 cpu : usr=30.76%, sys=67.80%, ctx=13, majf=0, minf=771 00:14:41.762 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:41.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.762 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:41.762 issued rwts: total=173439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.762 00:14:41.762 Run status group 0 (all jobs): 00:14:41.762 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=677MiB (710MB), run=5001-5001msec 00:14:41.763 ----------------------------------------------------- 00:14:41.763 Suppressions used: 00:14:41.763 count bytes template 00:14:41.763 1 11 /usr/src/fio/parse.c 00:14:41.763 1 8 libtcmalloc_minimal.so 00:14:41.763 1 904 libcrypto.so 00:14:41.763 ----------------------------------------------------- 00:14:41.763 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:41.763 05:00:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:41.763 { 00:14:41.763 "subsystems": [ 00:14:41.763 { 00:14:41.763 "subsystem": "bdev", 00:14:41.763 "config": [ 00:14:41.763 { 00:14:41.763 "params": { 00:14:41.763 "io_mechanism": "io_uring", 00:14:41.763 "conserve_cpu": false, 00:14:41.763 "filename": "/dev/nvme0n1", 00:14:41.763 "name": "xnvme_bdev" 00:14:41.763 }, 00:14:41.763 "method": "bdev_xnvme_create" 00:14:41.763 }, 00:14:41.763 { 00:14:41.763 "method": "bdev_wait_for_examine" 00:14:41.763 } 00:14:41.763 ] 00:14:41.763 } 00:14:41.763 ] 00:14:41.763 } 00:14:41.763 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:41.763 fio-3.35 00:14:41.763 Starting 1 thread 00:14:47.057 00:14:47.057 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82080: Thu Nov 21 05:01:03 2024 00:14:47.057 write: IOPS=26.5k, BW=103MiB/s (108MB/s)(518MiB/5009msec); 0 zone resets 00:14:47.057 slat (usec): min=2, max=137, avg= 3.97, stdev= 2.27 00:14:47.057 clat (usec): min=43, max=264817, avg=2278.53, stdev=6271.33 00:14:47.057 lat (usec): min=47, max=264820, avg=2282.50, stdev=6271.32 00:14:47.057 clat percentiles (usec): 00:14:47.057 | 1.00th=[ 363], 5.00th=[ 685], 10.00th=[ 938], 20.00th=[ 1270], 00:14:47.057 | 30.00th=[ 1385], 40.00th=[ 1483], 50.00th=[ 1565], 60.00th=[ 1647], 00:14:47.057 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2147], 95.00th=[ 10421], 00:14:47.057 | 99.00th=[ 13698], 99.50th=[ 14484], 99.90th=[ 17957], 99.95th=[ 20055], 00:14:47.057 | 99.99th=[263193] 00:14:47.057 bw ( KiB/s): min=27480, max=162360, per=100.00%, avg=106028.80, stdev=50309.20, samples=10 00:14:47.057 iops : min= 6870, max=40590, avg=26507.20, stdev=12577.30, samples=10 00:14:47.057 lat (usec) : 50=0.01%, 100=0.03%, 250=0.38%, 500=1.77%, 750=4.76% 00:14:47.057 lat (usec) : 1000=3.53% 00:14:47.057 lat (msec) : 2=75.61%, 4=7.61%, 10=1.01%, 20=5.25%, 50=0.01% 00:14:47.057 lat (msec) : 500=0.05% 00:14:47.057 cpu : usr=28.55%, sys=70.27%, ctx=11, majf=0, minf=771 00:14:47.057 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.2%, 16=20.5%, 32=56.4%, >=64=4.1% 00:14:47.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.057 complete : 0=0.0%, 4=97.5%, 8=0.5%, 16=0.5%, 32=0.2%, 64=1.2%, >=64=0.0% 00:14:47.057 issued rwts: total=0,132595,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.057 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:47.057 00:14:47.057 Run status group 0 (all jobs): 00:14:47.057 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=518MiB (543MB), run=5009-5009msec 00:14:47.057 ----------------------------------------------------- 00:14:47.057 Suppressions used: 00:14:47.057 count bytes template 00:14:47.057 1 11 /usr/src/fio/parse.c 00:14:47.057 1 8 libtcmalloc_minimal.so 00:14:47.057 1 904 libcrypto.so 00:14:47.057 ----------------------------------------------------- 00:14:47.057 00:14:47.057 00:14:47.057 real 
0m12.188s 00:14:47.057 user 0m4.211s 00:14:47.057 sys 0m7.531s 00:14:47.057 05:01:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.057 ************************************ 00:14:47.057 END TEST xnvme_fio_plugin 00:14:47.057 ************************************ 00:14:47.057 05:01:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:47.319 05:01:03 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:47.319 05:01:03 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:47.319 05:01:03 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:47.319 05:01:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:47.319 05:01:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:47.319 05:01:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.319 05:01:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.319 ************************************ 00:14:47.319 START TEST xnvme_rpc 00:14:47.319 ************************************ 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82155 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82155 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82155 ']' 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.319 05:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.319 [2024-11-21 05:01:03.912877] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
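The fio runs in the xnvme_fio_plugin tests load SPDK's fio plugin via LD_PRELOAD, alongside ASAN as resolved by the sanitizer probing in the trace, and address the bdev by its configured name rather than a device path. A sketch of the invocation, with every flag taken verbatim from the trace; the harness passes the JSON on /dev/fd/62, so /tmp/xnvme.json here is an illustrative substitute:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
          --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
          --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev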
00:14:47.319 [2024-11-21 05:01:03.913003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82155 ] 00:14:47.628 [2024-11-21 05:01:04.074913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.628 [2024-11-21 05:01:04.099636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 xnvme_bdev 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:48.219 05:01:04 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82155 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82155 ']' 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82155 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82155 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82155' 00:14:48.219 killing process with pid 82155 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82155 00:14:48.219 05:01:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82155 00:14:48.792 00:14:48.792 real 0m1.399s 00:14:48.792 user 0m1.492s 00:14:48.792 sys 0m0.370s 00:14:48.792 05:01:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.792 ************************************ 00:14:48.792 END TEST xnvme_rpc 00:14:48.792 ************************************ 00:14:48.792 05:01:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.792 05:01:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:48.792 05:01:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:48.792 05:01:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.792 05:01:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:48.792 ************************************ 00:14:48.792 START TEST xnvme_bdevperf 00:14:48.792 ************************************ 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.792 05:01:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.792 { 00:14:48.792 "subsystems": [ 00:14:48.792 { 00:14:48.792 "subsystem": "bdev", 00:14:48.792 "config": [ 00:14:48.792 { 00:14:48.792 "params": { 00:14:48.792 "io_mechanism": "io_uring", 00:14:48.792 "conserve_cpu": true, 00:14:48.792 "filename": "/dev/nvme0n1", 00:14:48.792 "name": "xnvme_bdev" 00:14:48.792 }, 00:14:48.792 "method": "bdev_xnvme_create" 00:14:48.792 }, 00:14:48.792 { 00:14:48.792 "method": "bdev_wait_for_examine" 00:14:48.792 } 00:14:48.792 ] 00:14:48.792 } 00:14:48.792 ] 00:14:48.792 } 00:14:48.792 [2024-11-21 05:01:05.366777] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:14:48.792 [2024-11-21 05:01:05.366894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82208 ] 00:14:49.054 [2024-11-21 05:01:05.526952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.054 [2024-11-21 05:01:05.551574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.054 Running I/O for 5 seconds... 00:14:50.936 39176.00 IOPS, 153.03 MiB/s [2024-11-21T05:01:09.055Z] 39682.00 IOPS, 155.01 MiB/s [2024-11-21T05:01:09.998Z] 39738.00 IOPS, 155.23 MiB/s [2024-11-21T05:01:10.941Z] 39875.00 IOPS, 155.76 MiB/s [2024-11-21T05:01:10.941Z] 39975.40 IOPS, 156.15 MiB/s 00:14:54.207 Latency(us) 00:14:54.207 [2024-11-21T05:01:10.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.207 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:54.207 xnvme_bdev : 5.01 39939.35 156.01 0.00 0.00 1598.10 362.34 14619.57 00:14:54.207 [2024-11-21T05:01:10.941Z] =================================================================================================================== 00:14:54.207 [2024-11-21T05:01:10.941Z] Total : 39939.35 156.01 0.00 0.00 1598.10 362.34 14619.57 00:14:54.207 05:01:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.207 05:01:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:54.207 05:01:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.207 05:01:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.207 05:01:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.207 { 00:14:54.207 "subsystems": [ 00:14:54.207 { 00:14:54.207 "subsystem": "bdev", 00:14:54.207 "config": [ 00:14:54.207 { 00:14:54.207 "params": { 00:14:54.207 "io_mechanism": "io_uring", 00:14:54.207 "conserve_cpu": true, 00:14:54.207 "filename": "/dev/nvme0n1", 00:14:54.207 "name": "xnvme_bdev" 00:14:54.207 }, 00:14:54.207 "method": "bdev_xnvme_create" 00:14:54.207 }, 00:14:54.207 { 00:14:54.207 "method": "bdev_wait_for_examine" 00:14:54.207 } 00:14:54.207 ] 00:14:54.207 } 00:14:54.207 ] 00:14:54.207 } 00:14:54.207 [2024-11-21 05:01:10.889913] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
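Each bdevperf pass in this section pipes the JSON just shown through /dev/fd/62. Saved to a file, the equivalent standalone invocation is roughly the following (a sketch; xnvme_bdev.json is a hypothetical name for the gen_conf output above):

  # 4 KiB random writes, queue depth 64, 5 s, against the xnvme_bdev target
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096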
00:14:54.207 [2024-11-21 05:01:10.890039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82277 ] 00:14:54.469 [2024-11-21 05:01:11.050676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.469 [2024-11-21 05:01:11.075227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.469 Running I/O for 5 seconds... 00:14:56.802 9886.00 IOPS, 38.62 MiB/s [2024-11-21T05:01:14.480Z] 10284.00 IOPS, 40.17 MiB/s [2024-11-21T05:01:15.425Z] 10854.00 IOPS, 42.40 MiB/s [2024-11-21T05:01:16.368Z] 11360.50 IOPS, 44.38 MiB/s [2024-11-21T05:01:16.368Z] 11877.60 IOPS, 46.40 MiB/s 00:14:59.634 Latency(us) 00:14:59.634 [2024-11-21T05:01:16.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.634 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:59.634 xnvme_bdev : 5.01 11874.67 46.39 0.00 0.00 5381.46 59.86 24399.56 00:14:59.634 [2024-11-21T05:01:16.368Z] =================================================================================================================== 00:14:59.634 [2024-11-21T05:01:16.368Z] Total : 11874.67 46.39 0.00 0.00 5381.46 59.86 24399.56 00:14:59.634 00:14:59.634 real 0m11.040s 00:14:59.634 user 0m7.938s 00:14:59.634 sys 0m2.286s 00:14:59.634 05:01:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.634 ************************************ 00:14:59.634 END TEST xnvme_bdevperf 00:14:59.634 ************************************ 00:14:59.634 05:01:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:59.896 05:01:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:59.896 05:01:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.896 05:01:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.896 05:01:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.896 ************************************ 00:14:59.896 START TEST xnvme_fio_plugin 00:14:59.896 ************************************ 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.896 05:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.896 { 00:14:59.896 "subsystems": [ 00:14:59.896 { 00:14:59.896 "subsystem": "bdev", 00:14:59.896 "config": [ 00:14:59.896 { 00:14:59.896 "params": { 00:14:59.896 "io_mechanism": "io_uring", 00:14:59.896 "conserve_cpu": true, 00:14:59.896 "filename": "/dev/nvme0n1", 00:14:59.896 "name": "xnvme_bdev" 00:14:59.896 }, 00:14:59.896 "method": "bdev_xnvme_create" 00:14:59.896 }, 00:14:59.896 { 00:14:59.896 "method": "bdev_wait_for_examine" 00:14:59.896 } 00:14:59.896 ] 00:14:59.896 } 00:14:59.896 ] 00:14:59.896 } 00:14:59.896 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.896 fio-3.35 00:14:59.896 Starting 1 thread 00:15:06.510 00:15:06.510 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82386: Thu Nov 21 05:01:22 2024 00:15:06.510 read: IOPS=39.3k, BW=154MiB/s (161MB/s)(769MiB/5001msec) 00:15:06.510 slat (nsec): min=2712, max=93686, avg=3447.58, stdev=1784.33 00:15:06.510 clat (usec): min=735, max=8072, avg=1486.89, stdev=302.05 00:15:06.510 lat (usec): min=738, max=8075, avg=1490.33, stdev=302.41 00:15:06.510 clat percentiles (usec): 00:15:06.510 | 1.00th=[ 947], 5.00th=[ 1074], 10.00th=[ 1139], 20.00th=[ 1221], 00:15:06.510 | 30.00th=[ 1303], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1532], 00:15:06.510 | 70.00th=[ 1614], 80.00th=[ 1729], 90.00th=[ 1893], 95.00th=[ 2040], 00:15:06.510 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 2737], 99.95th=[ 2802], 00:15:06.510 | 99.99th=[ 2933] 00:15:06.510 bw ( KiB/s): min=140800, 
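Stripped of harness plumbing, the fio run starting here is the invocation below (a sketch; the libasan.so.8 preload is only needed on this sanitizer build, which is what the ldd/grep/awk dance above detects, and xnvme_bdev.json again stands in for the JSON piped via /dev/fd/62):

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev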
max=194560, per=100.00%, avg=157639.11, stdev=18156.96, samples=9 00:15:06.510 iops : min=35200, max=48640, avg=39409.78, stdev=4539.24, samples=9 00:15:06.510 lat (usec) : 750=0.01%, 1000=2.02% 00:15:06.510 lat (msec) : 2=92.05%, 4=5.92%, 10=0.01% 00:15:06.510 cpu : usr=52.38%, sys=43.82%, ctx=13, majf=0, minf=771 00:15:06.510 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:06.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.510 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:06.510 issued rwts: total=196789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.510 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:06.510 00:15:06.510 Run status group 0 (all jobs): 00:15:06.510 READ: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=769MiB (806MB), run=5001-5001msec 00:15:06.510 ----------------------------------------------------- 00:15:06.510 Suppressions used: 00:15:06.510 count bytes template 00:15:06.510 1 11 /usr/src/fio/parse.c 00:15:06.510 1 8 libtcmalloc_minimal.so 00:15:06.510 1 904 libcrypto.so 00:15:06.510 ----------------------------------------------------- 00:15:06.510 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:06.510 05:01:22 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:06.510 05:01:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.510 { 00:15:06.510 "subsystems": [ 00:15:06.510 { 00:15:06.510 "subsystem": "bdev", 00:15:06.510 "config": [ 00:15:06.510 { 00:15:06.510 "params": { 00:15:06.510 "io_mechanism": "io_uring", 00:15:06.510 "conserve_cpu": true, 00:15:06.510 "filename": "/dev/nvme0n1", 00:15:06.510 "name": "xnvme_bdev" 00:15:06.510 }, 00:15:06.510 "method": "bdev_xnvme_create" 00:15:06.510 }, 00:15:06.510 { 00:15:06.510 "method": "bdev_wait_for_examine" 00:15:06.510 } 00:15:06.510 ] 00:15:06.510 } 00:15:06.510 ] 00:15:06.510 } 00:15:06.510 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:06.510 fio-3.35 00:15:06.510 Starting 1 thread 00:15:11.804 00:15:11.804 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82468: Thu Nov 21 05:01:28 2024 00:15:11.804 write: IOPS=30.9k, BW=121MiB/s (127MB/s)(605MiB/5009msec); 0 zone resets 00:15:11.804 slat (usec): min=2, max=181, avg= 3.91, stdev= 2.39 00:15:11.804 clat (usec): min=68, max=22298, avg=1930.39, stdev=2193.12 00:15:11.804 lat (usec): min=71, max=22302, avg=1934.30, stdev=2193.20 00:15:11.804 clat percentiles (usec): 00:15:11.804 | 1.00th=[ 396], 5.00th=[ 750], 10.00th=[ 1074], 20.00th=[ 1254], 00:15:11.804 | 30.00th=[ 1352], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1582], 00:15:11.804 | 70.00th=[ 1663], 80.00th=[ 1778], 90.00th=[ 1958], 95.00th=[ 2409], 00:15:11.804 | 99.00th=[13173], 99.50th=[14091], 99.90th=[16319], 99.95th=[17695], 00:15:11.804 | 99.99th=[20317] 00:15:11.804 bw ( KiB/s): min=59792, max=164144, per=100.00%, avg=123851.60, stdev=44038.66, samples=10 00:15:11.804 iops : min=14948, max=41036, avg=30962.90, stdev=11009.67, samples=10 00:15:11.804 lat (usec) : 100=0.02%, 250=0.28%, 500=1.33%, 750=3.29%, 1000=3.68% 00:15:11.804 lat (msec) : 2=82.60%, 4=4.12%, 10=1.25%, 20=3.42%, 50=0.01% 00:15:11.804 cpu : usr=56.07%, sys=37.96%, ctx=17, majf=0, minf=771 00:15:11.804 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.7%, 16=21.5%, 32=55.3%, >=64=3.2% 00:15:11.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.804 complete : 0=0.0%, 4=97.8%, 8=0.3%, 16=0.3%, 32=0.2%, 64=1.3%, >=64=0.0% 00:15:11.804 issued rwts: total=0,154915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.804 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:11.804 00:15:11.804 Run status group 0 (all jobs): 00:15:11.804 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=605MiB (635MB), run=5009-5009msec 00:15:12.066 ----------------------------------------------------- 00:15:12.066 Suppressions used: 00:15:12.066 count bytes template 00:15:12.066 1 11 /usr/src/fio/parse.c 00:15:12.066 1 8 libtcmalloc_minimal.so 00:15:12.066 1 904 libcrypto.so 00:15:12.066 ----------------------------------------------------- 00:15:12.066 00:15:12.066 00:15:12.066 real 0m12.223s 00:15:12.066 user 0m6.698s 00:15:12.066 
sys 0m4.708s 00:15:12.066 05:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.066 05:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:12.066 ************************************ 00:15:12.066 END TEST xnvme_fio_plugin 00:15:12.066 ************************************ 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:12.066 05:01:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:12.066 05:01:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.066 05:01:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.066 05:01:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:12.066 ************************************ 00:15:12.066 START TEST xnvme_rpc 00:15:12.066 ************************************ 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82543 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82543 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82543 ']' 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.066 05:01:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.328 [2024-11-21 05:01:28.797930] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
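At this point the harness switches from the block device to the NVMe generic character device: xnvme.sh@75-84 above set io_mechanism=io_uring_cmd, filename=/dev/ng0n1, and conserve_cpu=false. Pieced together from the xtrace lines (xnvme.sh@75-88), the matrix driving all of these repeated rounds is roughly:

  # sketch of the loop structure implied by the xtrace above
  for io in "${xnvme_io[@]}"; do                 # io_uring, io_uring_cmd, ...
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    # (xnvme.sh@77 also sets "filename" per device: /dev/nvme0n1 vs /dev/ng0n1)
    for cc in "${xnvme_conserve_cpu[@]}"; do     # false, then true
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc
      run_test xnvme_rpc xnvme_rpc
      run_test xnvme_bdevperf xnvme_bdevperf
      run_test xnvme_fio_plugin xnvme_fio_plugin
    done
  done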
00:15:12.328 [2024-11-21 05:01:28.798862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82543 ] 00:15:12.328 [2024-11-21 05:01:28.970349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.328 [2024-11-21 05:01:29.011350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 xnvme_bdev 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.279 05:01:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82543 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82543 ']' 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82543 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82543 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.280 killing process with pid 82543 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82543' 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82543 00:15:13.280 05:01:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82543 00:15:13.853 00:15:13.853 real 0m1.631s 00:15:13.853 user 0m1.587s 00:15:13.853 sys 0m0.547s 00:15:13.853 05:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.853 ************************************ 00:15:13.853 END TEST xnvme_rpc 00:15:13.853 ************************************ 00:15:13.853 05:01:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.853 05:01:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:13.853 05:01:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:13.853 05:01:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.853 05:01:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.853 ************************************ 00:15:13.853 START TEST xnvme_bdevperf 00:15:13.853 ************************************ 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:13.853 05:01:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:13.853 { 00:15:13.853 "subsystems": [ 00:15:13.853 { 00:15:13.853 "subsystem": "bdev", 00:15:13.853 "config": [ 00:15:13.853 { 00:15:13.853 "params": { 00:15:13.853 "io_mechanism": "io_uring_cmd", 00:15:13.853 "conserve_cpu": false, 00:15:13.853 "filename": "/dev/ng0n1", 00:15:13.853 "name": "xnvme_bdev" 00:15:13.853 }, 00:15:13.853 "method": "bdev_xnvme_create" 00:15:13.853 }, 00:15:13.853 { 00:15:13.853 "method": "bdev_wait_for_examine" 00:15:13.853 } 00:15:13.853 ] 00:15:13.853 } 00:15:13.853 ] 00:15:13.853 } 00:15:13.853 [2024-11-21 05:01:30.475835] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:13.853 [2024-11-21 05:01:30.475988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82606 ] 00:15:14.114 [2024-11-21 05:01:30.637220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.114 [2024-11-21 05:01:30.674380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.115 Running I/O for 5 seconds... 00:15:16.446 35840.00 IOPS, 140.00 MiB/s [2024-11-21T05:01:34.122Z] 36000.00 IOPS, 140.62 MiB/s [2024-11-21T05:01:35.066Z] 37098.67 IOPS, 144.92 MiB/s [2024-11-21T05:01:36.009Z] 37903.50 IOPS, 148.06 MiB/s 00:15:19.275 Latency(us) 00:15:19.275 [2024-11-21T05:01:36.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.275 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:19.275 xnvme_bdev : 5.00 38329.80 149.73 0.00 0.00 1665.65 696.32 3629.69 00:15:19.275 [2024-11-21T05:01:36.009Z] =================================================================================================================== 00:15:19.275 [2024-11-21T05:01:36.009Z] Total : 38329.80 149.73 0.00 0.00 1665.65 696.32 3629.69 00:15:19.275 05:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:19.275 05:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:19.275 05:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:19.275 05:01:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:19.275 05:01:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:19.535 { 00:15:19.535 "subsystems": [ 00:15:19.535 { 00:15:19.535 "subsystem": "bdev", 00:15:19.535 "config": [ 00:15:19.535 { 00:15:19.535 "params": { 00:15:19.535 "io_mechanism": "io_uring_cmd", 00:15:19.535 "conserve_cpu": false, 00:15:19.535 "filename": "/dev/ng0n1", 00:15:19.535 "name": "xnvme_bdev" 00:15:19.535 }, 00:15:19.535 "method": "bdev_xnvme_create" 00:15:19.535 }, 00:15:19.535 { 00:15:19.535 "method": "bdev_wait_for_examine" 00:15:19.535 } 00:15:19.535 ] 00:15:19.535 } 00:15:19.535 ] 00:15:19.535 } 00:15:19.535 [2024-11-21 05:01:36.046198] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:15:19.535 [2024-11-21 05:01:36.046313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82669 ] 00:15:19.535 [2024-11-21 05:01:36.202972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.535 [2024-11-21 05:01:36.227454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.796 Running I/O for 5 seconds... 00:15:21.682 37942.00 IOPS, 148.21 MiB/s [2024-11-21T05:01:39.359Z] 28063.00 IOPS, 109.62 MiB/s [2024-11-21T05:01:40.744Z] 24955.67 IOPS, 97.48 MiB/s [2024-11-21T05:01:41.686Z] 23525.25 IOPS, 91.90 MiB/s [2024-11-21T05:01:41.686Z] 22759.60 IOPS, 88.90 MiB/s 00:15:24.952 Latency(us) 00:15:24.952 [2024-11-21T05:01:41.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.952 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:24.952 xnvme_bdev : 5.01 22741.54 88.83 0.00 0.00 2808.33 60.65 18249.26 00:15:24.952 [2024-11-21T05:01:41.686Z] =================================================================================================================== 00:15:24.952 [2024-11-21T05:01:41.686Z] Total : 22741.54 88.83 0.00 0.00 2808.33 60.65 18249.26 00:15:24.952 05:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.952 05:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:24.952 05:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:24.952 05:01:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:24.952 05:01:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.952 { 00:15:24.952 "subsystems": [ 00:15:24.952 { 00:15:24.952 "subsystem": "bdev", 00:15:24.952 "config": [ 00:15:24.952 { 00:15:24.952 "params": { 00:15:24.952 "io_mechanism": "io_uring_cmd", 00:15:24.952 "conserve_cpu": false, 00:15:24.952 "filename": "/dev/ng0n1", 00:15:24.952 "name": "xnvme_bdev" 00:15:24.952 }, 00:15:24.952 "method": "bdev_xnvme_create" 00:15:24.952 }, 00:15:24.952 { 00:15:24.952 "method": "bdev_wait_for_examine" 00:15:24.952 } 00:15:24.952 ] 00:15:24.952 } 00:15:24.952 ] 00:15:24.952 } 00:15:24.952 [2024-11-21 05:01:41.557804] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:24.952 [2024-11-21 05:01:41.557933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82738 ] 00:15:25.213 [2024-11-21 05:01:41.718796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.213 [2024-11-21 05:01:41.742861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.213 Running I/O for 5 seconds... 
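Beyond randread and randwrite, the io_uring_cmd passes also measure unmap and write_zeroes, which on the generic device typically correspond to NVMe Dataset Management (deallocate) and Write Zeroes commands. Only the -w workload flag changes between passes (sketch, same assumed config file as before):

  bdevperf --json xnvme_bdev.json -q 64 -w unmap        -t 5 -T xnvme_bdev -o 4096
  bdevperf --json xnvme_bdev.json -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096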
00:15:27.540 67328.00 IOPS, 263.00 MiB/s [2024-11-21T05:01:44.909Z] 71424.00 IOPS, 279.00 MiB/s [2024-11-21T05:01:45.850Z] 74986.67 IOPS, 292.92 MiB/s [2024-11-21T05:01:47.226Z] 78032.00 IOPS, 304.81 MiB/s 00:15:30.492 Latency(us) 00:15:30.492 [2024-11-21T05:01:47.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.492 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:30.492 xnvme_bdev : 5.00 81937.69 320.07 0.00 0.00 777.70 419.05 2709.66 00:15:30.492 [2024-11-21T05:01:47.226Z] =================================================================================================================== 00:15:30.492 [2024-11-21T05:01:47.226Z] Total : 81937.69 320.07 0.00 0.00 777.70 419.05 2709.66 00:15:30.492 05:01:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:30.492 05:01:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:30.492 05:01:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:30.492 05:01:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:30.492 05:01:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:30.492 { 00:15:30.492 "subsystems": [ 00:15:30.492 { 00:15:30.492 "subsystem": "bdev", 00:15:30.492 "config": [ 00:15:30.492 { 00:15:30.492 "params": { 00:15:30.492 "io_mechanism": "io_uring_cmd", 00:15:30.492 "conserve_cpu": false, 00:15:30.492 "filename": "/dev/ng0n1", 00:15:30.492 "name": "xnvme_bdev" 00:15:30.492 }, 00:15:30.492 "method": "bdev_xnvme_create" 00:15:30.492 }, 00:15:30.492 { 00:15:30.492 "method": "bdev_wait_for_examine" 00:15:30.492 } 00:15:30.492 ] 00:15:30.492 } 00:15:30.492 ] 00:15:30.492 } 00:15:30.492 [2024-11-21 05:01:47.056116] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:30.492 [2024-11-21 05:01:47.056245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82801 ] 00:15:30.492 [2024-11-21 05:01:47.210540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.752 [2024-11-21 05:01:47.232913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.752 Running I/O for 5 seconds... 
00:15:32.630 53690.00 IOPS, 209.73 MiB/s [2024-11-21T05:01:50.753Z] 55879.00 IOPS, 218.28 MiB/s [2024-11-21T05:01:51.322Z] 57147.33 IOPS, 223.23 MiB/s [2024-11-21T05:01:52.703Z] 56644.25 IOPS, 221.27 MiB/s [2024-11-21T05:01:52.703Z] 55738.80 IOPS, 217.73 MiB/s 00:15:35.969 Latency(us) 00:15:35.969 [2024-11-21T05:01:52.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.970 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:35.970 xnvme_bdev : 5.00 55710.26 217.62 0.00 0.00 1144.70 149.66 12905.55 00:15:35.970 [2024-11-21T05:01:52.704Z] =================================================================================================================== 00:15:35.970 [2024-11-21T05:01:52.704Z] Total : 55710.26 217.62 0.00 0.00 1144.70 149.66 12905.55 00:15:35.970 00:15:35.970 real 0m22.083s 00:15:35.970 user 0m10.380s 00:15:35.970 sys 0m11.261s 00:15:35.970 05:01:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.970 ************************************ 00:15:35.970 END TEST xnvme_bdevperf 00:15:35.970 ************************************ 00:15:35.970 05:01:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 05:01:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:35.970 05:01:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:35.970 05:01:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.970 05:01:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 ************************************ 00:15:35.970 START TEST xnvme_fio_plugin 00:15:35.970 ************************************ 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.970 05:01:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:35.970 { 00:15:35.970 "subsystems": [ 00:15:35.970 { 00:15:35.970 "subsystem": "bdev", 00:15:35.970 "config": [ 00:15:35.970 { 00:15:35.970 "params": { 00:15:35.970 "io_mechanism": "io_uring_cmd", 00:15:35.970 "conserve_cpu": false, 00:15:35.970 "filename": "/dev/ng0n1", 00:15:35.970 "name": "xnvme_bdev" 00:15:35.970 }, 00:15:35.970 "method": "bdev_xnvme_create" 00:15:35.970 }, 00:15:35.970 { 00:15:35.970 "method": "bdev_wait_for_examine" 00:15:35.970 } 00:15:35.970 ] 00:15:35.970 } 00:15:35.970 ] 00:15:35.970 } 00:15:36.230 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:36.230 fio-3.35 00:15:36.230 Starting 1 thread 00:15:41.514 00:15:41.514 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82908: Thu Nov 21 05:01:58 2024 00:15:41.514 read: IOPS=40.4k, BW=158MiB/s (165MB/s)(789MiB/5001msec) 00:15:41.514 slat (nsec): min=2720, max=61373, avg=3369.12, stdev=1859.71 00:15:41.514 clat (usec): min=725, max=3325, avg=1449.26, stdev=275.42 00:15:41.514 lat (usec): min=727, max=3385, avg=1452.62, stdev=275.73 00:15:41.514 clat percentiles (usec): 00:15:41.514 | 1.00th=[ 898], 5.00th=[ 1020], 10.00th=[ 1090], 20.00th=[ 1205], 00:15:41.514 | 30.00th=[ 1303], 40.00th=[ 1369], 50.00th=[ 1450], 60.00th=[ 1516], 00:15:41.514 | 70.00th=[ 1582], 80.00th=[ 1663], 90.00th=[ 1778], 95.00th=[ 1893], 00:15:41.514 | 99.00th=[ 2212], 99.50th=[ 2311], 99.90th=[ 2573], 99.95th=[ 2868], 00:15:41.514 | 99.99th=[ 3130] 00:15:41.514 bw ( KiB/s): min=154624, max=166400, per=100.00%, avg=161621.33, stdev=4441.43, samples=9 00:15:41.514 iops : min=38656, max=41600, avg=40405.33, stdev=1110.36, samples=9 00:15:41.514 lat (usec) : 750=0.01%, 1000=4.13% 00:15:41.514 lat (msec) : 2=92.83%, 4=3.02% 00:15:41.514 cpu : usr=36.84%, sys=62.14%, ctx=7, majf=0, minf=771 00:15:41.514 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.514 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:41.514 issued rwts: total=201856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.514 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.514 00:15:41.514 Run status group 0 (all jobs): 00:15:41.514 READ: bw=158MiB/s (165MB/s), 158MiB/s-158MiB/s (165MB/s-165MB/s), io=789MiB (827MB), run=5001-5001msec 00:15:41.774 ----------------------------------------------------- 00:15:41.774 Suppressions used: 00:15:41.774 count bytes template 00:15:41.774 1 11 /usr/src/fio/parse.c 00:15:41.774 1 8 libtcmalloc_minimal.so 00:15:41.774 1 904 libcrypto.so 00:15:41.774 ----------------------------------------------------- 00:15:41.774 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:42.035 05:01:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:42.035 { 00:15:42.035 "subsystems": [ 00:15:42.035 { 00:15:42.035 "subsystem": "bdev", 00:15:42.035 "config": [ 00:15:42.035 { 00:15:42.035 "params": { 00:15:42.035 "io_mechanism": "io_uring_cmd", 00:15:42.035 "conserve_cpu": false, 00:15:42.035 "filename": "/dev/ng0n1", 00:15:42.035 "name": "xnvme_bdev" 00:15:42.035 }, 00:15:42.035 "method": "bdev_xnvme_create" 00:15:42.035 }, 00:15:42.035 { 00:15:42.035 "method": "bdev_wait_for_examine" 00:15:42.035 } 00:15:42.035 ] 00:15:42.035 } 00:15:42.035 ] 00:15:42.035 } 00:15:42.035 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:42.035 fio-3.35 00:15:42.035 Starting 1 thread 00:15:48.624 00:15:48.624 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82991: Thu Nov 21 05:02:04 2024 00:15:48.624 write: IOPS=39.1k, BW=153MiB/s (160MB/s)(764MiB/5001msec); 0 zone resets 00:15:48.624 slat (nsec): min=2792, max=56580, avg=3725.23, stdev=2011.16 00:15:48.624 clat (usec): min=252, max=7289, avg=1488.80, stdev=281.83 00:15:48.624 lat (usec): min=263, max=7292, avg=1492.53, stdev=282.23 00:15:48.624 clat percentiles (usec): 00:15:48.624 | 1.00th=[ 914], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1254], 00:15:48.624 | 30.00th=[ 1352], 40.00th=[ 1418], 50.00th=[ 1483], 60.00th=[ 1549], 00:15:48.624 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1827], 95.00th=[ 1942], 00:15:48.624 | 99.00th=[ 2212], 99.50th=[ 2376], 99.90th=[ 2966], 99.95th=[ 3228], 00:15:48.624 | 99.99th=[ 4359] 00:15:48.624 bw ( KiB/s): min=152592, max=163576, per=100.00%, avg=156869.33, stdev=3698.30, samples=9 00:15:48.624 iops : min=38148, max=40894, avg=39217.33, stdev=924.57, samples=9 00:15:48.624 lat (usec) : 500=0.03%, 750=0.15%, 1000=2.70% 00:15:48.624 lat (msec) : 2=93.54%, 4=3.57%, 10=0.01% 00:15:48.624 cpu : usr=38.56%, sys=60.36%, ctx=11, majf=0, minf=771 00:15:48.624 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.3%, 16=24.8%, 32=50.6%, >=64=1.6% 00:15:48.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.624 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:48.624 issued rwts: total=0,195611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.624 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:48.624 00:15:48.624 Run status group 0 (all jobs): 00:15:48.624 WRITE: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=764MiB (801MB), run=5001-5001msec 00:15:48.624 ----------------------------------------------------- 00:15:48.624 Suppressions used: 00:15:48.624 count bytes template 00:15:48.624 1 11 /usr/src/fio/parse.c 00:15:48.624 1 8 libtcmalloc_minimal.so 00:15:48.624 1 904 libcrypto.so 00:15:48.624 ----------------------------------------------------- 00:15:48.624 00:15:48.624 00:15:48.624 real 0m11.948s 00:15:48.624 user 0m4.900s 00:15:48.624 sys 0m6.625s 00:15:48.624 ************************************ 00:15:48.624 END TEST xnvme_fio_plugin 00:15:48.624 ************************************ 00:15:48.624 05:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.624 05:02:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.624 05:02:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:48.624 05:02:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:48.624 05:02:04 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:48.624 05:02:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:48.624 05:02:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:48.624 05:02:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.624 05:02:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.624 ************************************ 00:15:48.624 START TEST xnvme_rpc 00:15:48.624 ************************************ 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83065 00:15:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83065 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83065 ']' 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.624 05:02:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:48.624 [2024-11-21 05:02:04.650404] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
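This final xnvme_rpc round repeats the /dev/ng0n1 checks with CPU conservation enabled; against the earlier round, the only input difference is the -c flag on the create call (sketch, again assuming rpc_cmd wraps scripts/rpc.py):

  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  # ...after which the conserve_cpu readback expects "true" instead of "false"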
00:15:48.624 [2024-11-21 05:02:04.650583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83065 ] 00:15:48.624 [2024-11-21 05:02:04.809197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.624 [2024-11-21 05:02:04.849898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.884 xnvme_bdev 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:48.884 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.885 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83065 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83065 ']' 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83065 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83065 00:15:49.146 killing process with pid 83065 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83065' 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83065 00:15:49.146 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83065 00:15:49.408 ************************************ 00:15:49.408 END TEST xnvme_rpc 00:15:49.408 ************************************ 00:15:49.408 00:15:49.408 real 0m1.417s 00:15:49.408 user 0m1.405s 00:15:49.408 sys 0m0.485s 00:15:49.408 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.408 05:02:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.408 05:02:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:49.408 05:02:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:49.408 05:02:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.408 05:02:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.408 ************************************ 00:15:49.408 START TEST xnvme_bdevperf 00:15:49.408 ************************************ 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:49.408 05:02:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:49.408 { 00:15:49.408 "subsystems": [ 00:15:49.408 { 00:15:49.408 "subsystem": "bdev", 00:15:49.408 "config": [ 00:15:49.408 { 00:15:49.408 "params": { 00:15:49.408 "io_mechanism": "io_uring_cmd", 00:15:49.408 "conserve_cpu": true, 00:15:49.408 "filename": "/dev/ng0n1", 00:15:49.408 "name": "xnvme_bdev" 00:15:49.408 }, 00:15:49.408 "method": "bdev_xnvme_create" 00:15:49.408 }, 00:15:49.408 { 00:15:49.408 "method": "bdev_wait_for_examine" 00:15:49.408 } 00:15:49.408 ] 00:15:49.408 } 00:15:49.408 ] 00:15:49.408 } 00:15:49.408 [2024-11-21 05:02:06.110761] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:15:49.408 [2024-11-21 05:02:06.110884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83124 ] 00:15:49.669 [2024-11-21 05:02:06.271295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.669 [2024-11-21 05:02:06.295477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.669 Running I/O for 5 seconds... 00:15:52.000 43388.00 IOPS, 169.48 MiB/s [2024-11-21T05:02:09.727Z] 43791.50 IOPS, 171.06 MiB/s [2024-11-21T05:02:10.675Z] 43871.00 IOPS, 171.37 MiB/s [2024-11-21T05:02:11.619Z] 43351.25 IOPS, 169.34 MiB/s 00:15:54.885 Latency(us) 00:15:54.885 [2024-11-21T05:02:11.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.885 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:54.885 xnvme_bdev : 5.00 43275.47 169.04 0.00 0.00 1475.23 649.06 9779.99 00:15:54.885 [2024-11-21T05:02:11.619Z] =================================================================================================================== 00:15:54.885 [2024-11-21T05:02:11.619Z] Total : 43275.47 169.04 0.00 0.00 1475.23 649.06 9779.99 00:15:54.885 05:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:54.885 05:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:54.885 05:02:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:54.885 05:02:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:54.885 05:02:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:54.885 { 00:15:54.885 "subsystems": [ 00:15:54.885 { 00:15:54.885 "subsystem": "bdev", 00:15:54.885 "config": [ 00:15:54.885 { 00:15:54.885 "params": { 00:15:54.885 "io_mechanism": "io_uring_cmd", 00:15:54.885 "conserve_cpu": true, 00:15:54.885 "filename": "/dev/ng0n1", 00:15:54.885 "name": "xnvme_bdev" 00:15:54.885 }, 00:15:54.885 "method": "bdev_xnvme_create" 00:15:54.885 }, 00:15:54.885 { 00:15:54.885 "method": "bdev_wait_for_examine" 00:15:54.885 } 00:15:54.885 ] 00:15:54.885 } 00:15:54.885 ] 00:15:54.885 } 00:15:55.146 [2024-11-21 05:02:11.620308] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
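Every xnvme_rpc assertion above has the same shape: dump the target's saved bdev config and pluck one parameter of the recorded bdev_xnvme_create call. A hedged sketch of that helper (rpc.py path assumed; the jq filter is the one shown in the trace):

    rpc_xnvme() {  # usage: rpc_xnvme name | filename | io_mechanism | conserve_cpu
      ./scripts/rpc.py framework_get_config bdev |
        jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    [[ $(rpc_xnvme conserve_cpu) == true ]] || echo 'conserve_cpu not set'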
00:15:55.146 [2024-11-21 05:02:11.620548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83187 ] 00:15:55.146 [2024-11-21 05:02:11.782657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.146 [2024-11-21 05:02:11.806890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.407 Running I/O for 5 seconds... 00:15:57.293 43761.00 IOPS, 170.94 MiB/s [2024-11-21T05:02:14.971Z] 44622.50 IOPS, 174.31 MiB/s [2024-11-21T05:02:15.913Z] 43975.67 IOPS, 171.78 MiB/s [2024-11-21T05:02:17.299Z] 44097.75 IOPS, 172.26 MiB/s [2024-11-21T05:02:17.299Z] 44120.40 IOPS, 172.35 MiB/s 00:16:00.565 Latency(us) 00:16:00.565 [2024-11-21T05:02:17.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.565 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:00.565 xnvme_bdev : 5.01 44089.08 172.22 0.00 0.00 1447.48 338.71 5898.24 00:16:00.565 [2024-11-21T05:02:17.299Z] =================================================================================================================== 00:16:00.565 [2024-11-21T05:02:17.299Z] Total : 44089.08 172.22 0.00 0.00 1447.48 338.71 5898.24 00:16:00.565 05:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:00.565 05:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:00.565 05:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:00.565 05:02:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:00.565 05:02:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:00.565 { 00:16:00.565 "subsystems": [ 00:16:00.565 { 00:16:00.565 "subsystem": "bdev", 00:16:00.565 "config": [ 00:16:00.565 { 00:16:00.565 "params": { 00:16:00.565 "io_mechanism": "io_uring_cmd", 00:16:00.565 "conserve_cpu": true, 00:16:00.565 "filename": "/dev/ng0n1", 00:16:00.565 "name": "xnvme_bdev" 00:16:00.565 }, 00:16:00.565 "method": "bdev_xnvme_create" 00:16:00.565 }, 00:16:00.565 { 00:16:00.565 "method": "bdev_wait_for_examine" 00:16:00.565 } 00:16:00.565 ] 00:16:00.565 } 00:16:00.565 ] 00:16:00.565 } 00:16:00.565 [2024-11-21 05:02:17.131099] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:00.565 [2024-11-21 05:02:17.131213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83255 ] 00:16:00.565 [2024-11-21 05:02:17.290265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.827 [2024-11-21 05:02:17.314661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.827 Running I/O for 5 seconds... 
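Both bdevperf runs above take their configuration as --json /dev/fd/62, i.e. the JSON block printed in the trace reaches the tool through a file-descriptor redirect rather than a temp file. In bash that is most naturally process substitution; a standalone sketch with the binary path from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
        {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
         "conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
        {"method":"bdev_wait_for_examine"}]}]}') \
      -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096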
00:16:02.710 83648.00 IOPS, 326.75 MiB/s [2024-11-21T05:02:20.829Z] 83712.00 IOPS, 327.00 MiB/s [2024-11-21T05:02:21.770Z] 83584.00 IOPS, 326.50 MiB/s [2024-11-21T05:02:22.706Z] 83584.00 IOPS, 326.50 MiB/s [2024-11-21T05:02:22.707Z] 86182.40 IOPS, 336.65 MiB/s 00:16:05.973 Latency(us) 00:16:05.973 [2024-11-21T05:02:22.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.973 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:05.973 xnvme_bdev : 5.00 86147.47 336.51 0.00 0.00 739.55 346.58 2709.66 00:16:05.973 [2024-11-21T05:02:22.707Z] =================================================================================================================== 00:16:05.973 [2024-11-21T05:02:22.707Z] Total : 86147.47 336.51 0.00 0.00 739.55 346.58 2709.66 00:16:05.973 05:02:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:05.973 05:02:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:05.973 05:02:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:05.973 05:02:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:05.973 05:02:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:05.973 { 00:16:05.973 "subsystems": [ 00:16:05.973 { 00:16:05.973 "subsystem": "bdev", 00:16:05.973 "config": [ 00:16:05.973 { 00:16:05.973 "params": { 00:16:05.973 "io_mechanism": "io_uring_cmd", 00:16:05.973 "conserve_cpu": true, 00:16:05.973 "filename": "/dev/ng0n1", 00:16:05.973 "name": "xnvme_bdev" 00:16:05.973 }, 00:16:05.973 "method": "bdev_xnvme_create" 00:16:05.973 }, 00:16:05.973 { 00:16:05.973 "method": "bdev_wait_for_examine" 00:16:05.973 } 00:16:05.973 ] 00:16:05.973 } 00:16:05.973 ] 00:16:05.973 } 00:16:05.973 [2024-11-21 05:02:22.628946] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:05.973 [2024-11-21 05:02:22.629074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83322 ] 00:16:06.234 [2024-11-21 05:02:22.790902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.234 [2024-11-21 05:02:22.824091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.234 Running I/O for 5 seconds... 
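A quick cross-check of the unmap table above: bandwidth is IOPS times the 4096-byte IO size, divided by 2^20 for MiB/s.

    awk 'BEGIN { printf "%.2f MiB/s\n", 86147.47 * 4096 / 1048576 }'   # 336.51, matching the Total row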
00:16:08.545 50788.00 IOPS, 198.39 MiB/s [2024-11-21T05:02:26.219Z] 50127.00 IOPS, 195.81 MiB/s [2024-11-21T05:02:27.160Z] 47601.67 IOPS, 185.94 MiB/s [2024-11-21T05:02:28.099Z] 45854.00 IOPS, 179.12 MiB/s [2024-11-21T05:02:28.099Z] 44683.20 IOPS, 174.54 MiB/s 00:16:11.365 Latency(us) 00:16:11.365 [2024-11-21T05:02:28.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.365 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:11.365 xnvme_bdev : 5.00 44659.28 174.45 0.00 0.00 1427.80 83.50 23290.49 00:16:11.365 [2024-11-21T05:02:28.099Z] =================================================================================================================== 00:16:11.365 [2024-11-21T05:02:28.099Z] Total : 44659.28 174.45 0.00 0.00 1427.80 83.50 23290.49 00:16:11.684 00:16:11.684 real 0m22.188s 00:16:11.684 user 0m14.045s 00:16:11.684 sys 0m6.075s 00:16:11.684 05:02:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.684 05:02:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:11.684 ************************************ 00:16:11.684 END TEST xnvme_bdevperf 00:16:11.684 ************************************ 00:16:11.684 05:02:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:11.684 05:02:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:11.685 05:02:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.685 05:02:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:11.685 ************************************ 00:16:11.685 START TEST xnvme_fio_plugin 00:16:11.685 ************************************ 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
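The fio_plugin prologue traced around this point (the ldd | grep | awk detection continues just below) solves a dlopen problem: fio itself is not ASan-instrumented, so the harness finds the libasan that the SPDK fio plugin links against and preloads it together with the plugin. A sketch under the same paths, with $conf standing in for the JSON config shown later in the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    [[ -n $asan_lib ]] && LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=<(echo "$conf") --filename=xnvme_bdev \
      --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
      --time_based --runtime=5 --thread=1 --name xnvme_bdev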
00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:11.685 05:02:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:11.685 { 00:16:11.685 "subsystems": [ 00:16:11.685 { 00:16:11.685 "subsystem": "bdev", 00:16:11.685 "config": [ 00:16:11.685 { 00:16:11.685 "params": { 00:16:11.685 "io_mechanism": "io_uring_cmd", 00:16:11.685 "conserve_cpu": true, 00:16:11.685 "filename": "/dev/ng0n1", 00:16:11.685 "name": "xnvme_bdev" 00:16:11.685 }, 00:16:11.685 "method": "bdev_xnvme_create" 00:16:11.685 }, 00:16:11.685 { 00:16:11.685 "method": "bdev_wait_for_examine" 00:16:11.685 } 00:16:11.685 ] 00:16:11.685 } 00:16:11.685 ] 00:16:11.685 } 00:16:11.961 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:11.961 fio-3.35 00:16:11.961 Starting 1 thread 00:16:17.259 00:16:17.259 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83425: Thu Nov 21 05:02:33 2024 00:16:17.259 read: IOPS=35.8k, BW=140MiB/s (147MB/s)(700MiB/5002msec) 00:16:17.259 slat (usec): min=2, max=114, avg= 3.50, stdev= 2.01 00:16:17.259 clat (usec): min=841, max=6902, avg=1643.54, stdev=285.89 00:16:17.259 lat (usec): min=843, max=6904, avg=1647.04, stdev=286.19 00:16:17.259 clat percentiles (usec): 00:16:17.259 | 1.00th=[ 1106], 5.00th=[ 1237], 10.00th=[ 1319], 20.00th=[ 1401], 00:16:17.259 | 30.00th=[ 1483], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1680], 00:16:17.259 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2147], 00:16:17.259 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 2835], 99.95th=[ 3064], 00:16:17.259 | 99.99th=[ 3228] 00:16:17.259 bw ( KiB/s): min=136704, max=153088, per=100.00%, avg=143528.00, stdev=4567.94, samples=9 00:16:17.259 iops : min=34176, max=38272, avg=35882.00, stdev=1141.99, samples=9 00:16:17.259 lat (usec) : 1000=0.13% 00:16:17.259 lat (msec) : 2=88.78%, 4=11.09%, 10=0.01% 00:16:17.259 cpu : usr=56.59%, sys=40.21%, ctx=42, majf=0, minf=771 00:16:17.259 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:17.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.259 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:16:17.259 issued rwts: total=179192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:17.259 00:16:17.259 Run status group 0 (all jobs): 00:16:17.259 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=700MiB (734MB), run=5002-5002msec 00:16:17.833 ----------------------------------------------------- 00:16:17.833 Suppressions used: 00:16:17.833 count bytes template 00:16:17.833 1 11 /usr/src/fio/parse.c 00:16:17.833 1 8 libtcmalloc_minimal.so 00:16:17.833 1 904 libcrypto.so 00:16:17.833 ----------------------------------------------------- 00:16:17.833 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:17.833 05:02:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:17.833 { 00:16:17.833 "subsystems": [ 00:16:17.833 { 00:16:17.833 "subsystem": "bdev", 00:16:17.833 "config": [ 00:16:17.833 { 00:16:17.833 "params": { 00:16:17.833 "io_mechanism": "io_uring_cmd", 00:16:17.833 "conserve_cpu": true, 00:16:17.833 "filename": "/dev/ng0n1", 00:16:17.833 "name": "xnvme_bdev" 00:16:17.833 }, 00:16:17.833 "method": "bdev_xnvme_create" 00:16:17.833 }, 00:16:17.833 { 00:16:17.833 "method": "bdev_wait_for_examine" 00:16:17.833 } 00:16:17.833 ] 00:16:17.833 } 00:16:17.833 ] 00:16:17.833 } 00:16:18.096 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:18.096 fio-3.35 00:16:18.096 Starting 1 thread 00:16:24.681 00:16:24.681 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83510: Thu Nov 21 05:02:40 2024 00:16:24.681 write: IOPS=36.7k, BW=143MiB/s (150MB/s)(717MiB/5002msec); 0 zone resets 00:16:24.681 slat (usec): min=2, max=384, avg= 4.17, stdev= 2.78 00:16:24.681 clat (usec): min=458, max=8744, avg=1574.23, stdev=298.97 00:16:24.681 lat (usec): min=461, max=8747, avg=1578.40, stdev=299.59 00:16:24.682 clat percentiles (usec): 00:16:24.682 | 1.00th=[ 1074], 5.00th=[ 1188], 10.00th=[ 1254], 20.00th=[ 1336], 00:16:24.682 | 30.00th=[ 1418], 40.00th=[ 1483], 50.00th=[ 1532], 60.00th=[ 1598], 00:16:24.682 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2073], 00:16:24.682 | 99.00th=[ 2409], 99.50th=[ 2606], 99.90th=[ 3490], 99.95th=[ 4359], 00:16:24.682 | 99.99th=[ 6915] 00:16:24.682 bw ( KiB/s): min=141349, max=154952, per=99.98%, avg=146768.56, stdev=5202.23, samples=9 00:16:24.682 iops : min=35337, max=38738, avg=36692.11, stdev=1300.59, samples=9 00:16:24.682 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.23% 00:16:24.682 lat (msec) : 2=92.71%, 4=6.97%, 10=0.07% 00:16:24.682 cpu : usr=52.35%, sys=43.01%, ctx=11, majf=0, minf=771 00:16:24.682 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:16:24.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.682 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:24.682 issued rwts: total=0,183577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.682 00:16:24.682 Run status group 0 (all jobs): 00:16:24.682 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=717MiB (752MB), run=5002-5002msec 00:16:24.682 ----------------------------------------------------- 00:16:24.682 Suppressions used: 00:16:24.682 count bytes template 00:16:24.682 1 11 /usr/src/fio/parse.c 00:16:24.682 1 8 libtcmalloc_minimal.so 00:16:24.682 1 904 libcrypto.so 00:16:24.682 ----------------------------------------------------- 00:16:24.682 00:16:24.682 00:16:24.682 real 0m12.287s 00:16:24.682 user 0m6.728s 00:16:24.682 sys 0m4.841s 00:16:24.682 ************************************ 00:16:24.682 END TEST xnvme_fio_plugin 00:16:24.682 ************************************ 00:16:24.682 05:02:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.682 05:02:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:24.682 Process with pid 83065 is not found 00:16:24.682 05:02:40 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 83065 00:16:24.682 05:02:40 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83065 ']' 00:16:24.682 05:02:40 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 83065 00:16:24.682 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83065) - No such process 00:16:24.682 05:02:40 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 83065 is not found' 00:16:24.682 ************************************ 00:16:24.682 END TEST nvme_xnvme 00:16:24.682 ************************************ 00:16:24.682 00:16:24.682 real 2m59.453s 00:16:24.682 user 1m32.554s 00:16:24.682 sys 1m12.519s 00:16:24.682 05:02:40 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.682 05:02:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.682 05:02:40 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:24.682 05:02:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.682 05:02:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.682 05:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:24.682 ************************************ 00:16:24.682 START TEST blockdev_xnvme 00:16:24.682 ************************************ 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:24.682 * Looking for test storage... 00:16:24.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.682 05:02:40 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.682 --rc genhtml_branch_coverage=1 00:16:24.682 --rc genhtml_function_coverage=1 00:16:24.682 --rc genhtml_legend=1 00:16:24.682 --rc geninfo_all_blocks=1 00:16:24.682 --rc geninfo_unexecuted_blocks=1 00:16:24.682 00:16:24.682 ' 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.682 --rc genhtml_branch_coverage=1 00:16:24.682 --rc genhtml_function_coverage=1 00:16:24.682 --rc genhtml_legend=1 00:16:24.682 --rc geninfo_all_blocks=1 00:16:24.682 --rc geninfo_unexecuted_blocks=1 00:16:24.682 00:16:24.682 ' 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.682 --rc genhtml_branch_coverage=1 00:16:24.682 --rc genhtml_function_coverage=1 00:16:24.682 --rc genhtml_legend=1 00:16:24.682 --rc geninfo_all_blocks=1 00:16:24.682 --rc geninfo_unexecuted_blocks=1 00:16:24.682 00:16:24.682 ' 00:16:24.682 05:02:40 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.682 --rc genhtml_branch_coverage=1 00:16:24.682 --rc genhtml_function_coverage=1 00:16:24.682 --rc genhtml_legend=1 00:16:24.682 --rc geninfo_all_blocks=1 00:16:24.683 --rc geninfo_unexecuted_blocks=1 00:16:24.683 00:16:24.683 ' 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:16:24.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83639 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83639 00:16:24.683 05:02:40 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83639 ']' 00:16:24.683 05:02:40 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.683 05:02:40 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.683 05:02:40 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.683 05:02:40 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.683 05:02:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.683 05:02:40 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:24.683 [2024-11-21 05:02:40.998460] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
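The scripts/common.sh trace in the prologue above (lt 1.15 2 via cmp_versions, used to pick lcov option names) compares the two dotted versions field by field. A compact sketch of the same comparison, simplified to two arguments:

    lt() {  # succeed if version $1 sorts strictly before version $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov is older than 2.x'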
00:16:24.683 [2024-11-21 05:02:40.999108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83639 ] 00:16:24.683 [2024-11-21 05:02:41.181363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.683 [2024-11-21 05:02:41.223242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.257 05:02:41 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.257 05:02:41 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:25.257 05:02:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:25.257 05:02:41 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:25.257 05:02:41 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:25.257 05:02:41 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:25.257 05:02:41 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:25.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:26.404 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:26.404 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:26.404 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:26.404 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
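get_zoned_devs, traced above, filters zoned namespaces out before the xnvme bdev list is built; each probe just reads the block queue's zoned attribute. A sketch of the predicate and loop, with the sysfs layout as in the trace:

    is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }
    for nvme in /sys/block/nvme*; do
      is_block_zoned "${nvme##*/}" && echo "excluding zoned device ${nvme##*/}"
    done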
00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:16:26.404 nvme0n1 00:16:26.404 nvme0n2 00:16:26.404 nvme0n3 00:16:26.404 nvme1n1 00:16:26.404 nvme2n1 00:16:26.404 nvme3n1 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.404 05:02:42 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.404 05:02:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.404 05:02:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.404 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.405 05:02:43 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.405 05:02:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:26.405 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "72129b5c-7450-45ab-b647-ce244c1de3a6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72129b5c-7450-45ab-b647-ce244c1de3a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "d28ed34e-b5c3-4c42-a299-04b16ebf9cd2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d28ed34e-b5c3-4c42-a299-04b16ebf9cd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "fd9f60a0-4b6d-4b89-ba86-5824fa5ccf2f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd9f60a0-4b6d-4b89-ba86-5824fa5ccf2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "aad6fa84-f272-4636-be36-9884e4d063ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "aad6fa84-f272-4636-be36-9884e4d063ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b8d8e707-860b-4804-9ff8-307904b1d8ac"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b8d8e707-860b-4804-9ff8-307904b1d8ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cf3dcdba-b96a-4fa0-b5c1-ddf791d04cf8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cf3dcdba-b96a-4fa0-b5c1-ddf791d04cf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:26.668 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:26.668 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:16:26.668 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:26.668 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 83639 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83639 ']' 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83639 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83639 00:16:26.668 killing process with pid 83639 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83639' 00:16:26.668 05:02:43 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83639 00:16:26.668 
05:02:43 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83639 00:16:26.932 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:26.932 05:02:43 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:26.932 05:02:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:26.932 05:02:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.932 05:02:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.194 ************************************ 00:16:27.194 START TEST bdev_hello_world 00:16:27.194 ************************************ 00:16:27.194 05:02:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:27.194 [2024-11-21 05:02:43.745273] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:27.194 [2024-11-21 05:02:43.745660] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83906 ] 00:16:27.194 [2024-11-21 05:02:43.911996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.456 [2024-11-21 05:02:43.953862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.718 [2024-11-21 05:02:44.216339] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:27.718 [2024-11-21 05:02:44.216701] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:27.718 [2024-11-21 05:02:44.216741] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:27.718 [2024-11-21 05:02:44.219277] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:27.718 [2024-11-21 05:02:44.221062] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:27.718 [2024-11-21 05:02:44.221126] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:27.718 [2024-11-21 05:02:44.221704] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
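The NOTICE lines above are the entire hello_world data path: open the named bdev, write the string, read it back, compare. Reproducing the run outside the harness is one invocation, with the binary and config paths exactly as in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1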
00:16:27.718 00:16:27.718 [2024-11-21 05:02:44.221752] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:27.981 ************************************ 00:16:27.981 END TEST bdev_hello_world 00:16:27.981 ************************************ 00:16:27.981 00:16:27.981 real 0m0.812s 00:16:27.981 user 0m0.402s 00:16:27.981 sys 0m0.264s 00:16:27.981 05:02:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.981 05:02:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:27.981 05:02:44 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:27.981 05:02:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.981 05:02:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.981 05:02:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.981 ************************************ 00:16:27.981 START TEST bdev_bounds 00:16:27.981 ************************************ 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:27.981 Process bdevio pid: 83932 00:16:27.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83932 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83932' 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83932 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83932 ']' 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.981 05:02:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:27.981 [2024-11-21 05:02:44.628023] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
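bdev_bounds, starting above, launches bdevio in wait-for-RPC mode (-w) and then drives the CUnit suites from a helper script over the RPC socket. A hedged sketch of that orchestration; the two commands are from the trace, while the backgrounding and cleanup steps are assumptions:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # the harness waits for the RPC socket before kicking the tests
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"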
00:16:27.981 [2024-11-21 05:02:44.628192] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83932 ] 00:16:28.242 [2024-11-21 05:02:44.792969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.242 [2024-11-21 05:02:44.838319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.242 [2024-11-21 05:02:44.838699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.242 [2024-11-21 05:02:44.838713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.815 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.815 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:28.815 05:02:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:29.078 I/O targets: 00:16:29.078 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:29.078 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:29.078 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:29.078 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:29.078 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:29.078 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:29.078 00:16:29.078 00:16:29.078 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.078 http://cunit.sourceforge.net/ 00:16:29.078 00:16:29.078 00:16:29.078 Suite: bdevio tests on: nvme3n1 00:16:29.078 Test: blockdev write read block ...passed 00:16:29.078 Test: blockdev write zeroes read block ...passed 00:16:29.078 Test: blockdev write zeroes read no split ...passed 00:16:29.078 Test: blockdev write zeroes read split ...passed 00:16:29.078 Test: blockdev write zeroes read split partial ...passed 00:16:29.078 Test: blockdev reset ...passed 00:16:29.078 Test: blockdev write read 8 blocks ...passed 00:16:29.078 Test: blockdev write read size > 128k ...passed 00:16:29.078 Test: blockdev write read invalid size ...passed 00:16:29.078 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.078 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.078 Test: blockdev write read max offset ...passed 00:16:29.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.078 Test: blockdev writev readv 8 blocks ...passed 00:16:29.078 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.078 Test: blockdev writev readv block ...passed 00:16:29.078 Test: blockdev writev readv size > 128k ...passed 00:16:29.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.078 Test: blockdev comparev and writev ...passed 00:16:29.078 Test: blockdev nvme passthru rw ...passed 00:16:29.078 Test: blockdev nvme passthru vendor specific ...passed 00:16:29.078 Test: blockdev nvme admin passthru ...passed 00:16:29.078 Test: blockdev copy ...passed 00:16:29.078 Suite: bdevio tests on: nvme2n1 00:16:29.078 Test: blockdev write read block ...passed 00:16:29.078 Test: blockdev write zeroes read block ...passed 00:16:29.078 Test: blockdev write zeroes read no split ...passed 00:16:29.078 Test: blockdev write zeroes read split ...passed 00:16:29.078 Test: blockdev write zeroes read split partial ...passed 00:16:29.078 Test: blockdev reset ...passed 
00:16:29.078 Test: blockdev write read 8 blocks ...passed 00:16:29.078 Test: blockdev write read size > 128k ...passed 00:16:29.078 Test: blockdev write read invalid size ...passed 00:16:29.078 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.078 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.078 Test: blockdev write read max offset ...passed 00:16:29.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.078 Test: blockdev writev readv 8 blocks ...passed 00:16:29.078 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.078 Test: blockdev writev readv block ...passed 00:16:29.078 Test: blockdev writev readv size > 128k ...passed 00:16:29.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.078 Test: blockdev comparev and writev ...passed 00:16:29.078 Test: blockdev nvme passthru rw ...passed 00:16:29.078 Test: blockdev nvme passthru vendor specific ...passed 00:16:29.078 Test: blockdev nvme admin passthru ...passed 00:16:29.078 Test: blockdev copy ...passed 00:16:29.078 Suite: bdevio tests on: nvme1n1 00:16:29.078 Test: blockdev write read block ...passed 00:16:29.078 Test: blockdev write zeroes read block ...passed 00:16:29.078 Test: blockdev write zeroes read no split ...passed 00:16:29.078 Test: blockdev write zeroes read split ...passed 00:16:29.078 Test: blockdev write zeroes read split partial ...passed 00:16:29.078 Test: blockdev reset ...passed 00:16:29.078 Test: blockdev write read 8 blocks ...passed 00:16:29.078 Test: blockdev write read size > 128k ...passed 00:16:29.078 Test: blockdev write read invalid size ...passed 00:16:29.078 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.078 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.078 Test: blockdev write read max offset ...passed 00:16:29.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.078 Test: blockdev writev readv 8 blocks ...passed 00:16:29.078 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.078 Test: blockdev writev readv block ...passed 00:16:29.078 Test: blockdev writev readv size > 128k ...passed 00:16:29.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.078 Test: blockdev comparev and writev ...passed 00:16:29.078 Test: blockdev nvme passthru rw ...passed 00:16:29.078 Test: blockdev nvme passthru vendor specific ...passed 00:16:29.078 Test: blockdev nvme admin passthru ...passed 00:16:29.078 Test: blockdev copy ...passed 00:16:29.078 Suite: bdevio tests on: nvme0n3 00:16:29.078 Test: blockdev write read block ...passed 00:16:29.078 Test: blockdev write zeroes read block ...passed 00:16:29.078 Test: blockdev write zeroes read no split ...passed 00:16:29.078 Test: blockdev write zeroes read split ...passed 00:16:29.078 Test: blockdev write zeroes read split partial ...passed 00:16:29.078 Test: blockdev reset ...passed 00:16:29.078 Test: blockdev write read 8 blocks ...passed 00:16:29.078 Test: blockdev write read size > 128k ...passed 00:16:29.078 Test: blockdev write read invalid size ...passed 00:16:29.078 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.078 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.078 Test: blockdev write read max offset ...passed 00:16:29.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.078 Test: blockdev writev readv 8 blocks 
...passed 00:16:29.078 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.078 Test: blockdev writev readv block ...passed 00:16:29.078 Test: blockdev writev readv size > 128k ...passed 00:16:29.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.078 Test: blockdev comparev and writev ...passed 00:16:29.078 Test: blockdev nvme passthru rw ...passed 00:16:29.078 Test: blockdev nvme passthru vendor specific ...passed 00:16:29.078 Test: blockdev nvme admin passthru ...passed 00:16:29.078 Test: blockdev copy ...passed 00:16:29.078 Suite: bdevio tests on: nvme0n2 00:16:29.078 Test: blockdev write read block ...passed 00:16:29.078 Test: blockdev write zeroes read block ...passed 00:16:29.078 Test: blockdev write zeroes read no split ...passed 00:16:29.078 Test: blockdev write zeroes read split ...passed 00:16:29.078 Test: blockdev write zeroes read split partial ...passed 00:16:29.078 Test: blockdev reset ...passed 00:16:29.078 Test: blockdev write read 8 blocks ...passed 00:16:29.078 Test: blockdev write read size > 128k ...passed 00:16:29.078 Test: blockdev write read invalid size ...passed 00:16:29.078 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.078 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.078 Test: blockdev write read max offset ...passed 00:16:29.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.340 Test: blockdev writev readv 8 blocks ...passed 00:16:29.340 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.340 Test: blockdev writev readv block ...passed 00:16:29.340 Test: blockdev writev readv size > 128k ...passed 00:16:29.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.340 Test: blockdev comparev and writev ...passed 00:16:29.340 Test: blockdev nvme passthru rw ...passed 00:16:29.340 Test: blockdev nvme passthru vendor specific ...passed 00:16:29.340 Test: blockdev nvme admin passthru ...passed 00:16:29.340 Test: blockdev copy ...passed 00:16:29.340 Suite: bdevio tests on: nvme0n1 00:16:29.340 Test: blockdev write read block ...passed 00:16:29.340 Test: blockdev write zeroes read block ...passed 00:16:29.340 Test: blockdev write zeroes read no split ...passed 00:16:29.340 Test: blockdev write zeroes read split ...passed 00:16:29.340 Test: blockdev write zeroes read split partial ...passed 00:16:29.340 Test: blockdev reset ...passed 00:16:29.340 Test: blockdev write read 8 blocks ...passed 00:16:29.340 Test: blockdev write read size > 128k ...passed 00:16:29.340 Test: blockdev write read invalid size ...passed 00:16:29.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.340 Test: blockdev write read max offset ...passed 00:16:29.340 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.340 Test: blockdev writev readv 8 blocks ...passed 00:16:29.340 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.340 Test: blockdev writev readv block ...passed 00:16:29.340 Test: blockdev writev readv size > 128k ...passed 00:16:29.340 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.340 Test: blockdev comparev and writev ...passed 00:16:29.340 Test: blockdev nvme passthru rw ...passed 00:16:29.340 Test: blockdev nvme passthru vendor specific ...passed 00:16:29.340 Test: blockdev nvme admin passthru ...passed 00:16:29.340 Test: blockdev copy ...passed 
00:16:29.340 00:16:29.340 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.340 suites 6 6 n/a 0 0 00:16:29.340 tests 138 138 138 0 0 00:16:29.340 asserts 780 780 780 0 n/a 00:16:29.340 00:16:29.340 Elapsed time = 0.650 seconds 00:16:29.340 0 00:16:29.340 05:02:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83932 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83932 ']' 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83932 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83932 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83932' 00:16:29.341 killing process with pid 83932 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83932 00:16:29.341 05:02:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83932 00:16:29.603 05:02:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:29.603 00:16:29.603 real 0m1.662s 00:16:29.603 user 0m3.915s 00:16:29.603 sys 0m0.431s 00:16:29.603 05:02:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.603 05:02:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:29.603 ************************************ 00:16:29.603 END TEST bdev_bounds 00:16:29.603 ************************************ 00:16:29.603 05:02:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:29.603 05:02:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:29.603 05:02:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.603 05:02:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.603 ************************************ 00:16:29.603 START TEST bdev_nbd 00:16:29.603 ************************************ 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
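The bdev_nbd test starting here is gated on the kernel NBD driver (the [[ -e /sys/module/nbd ]] check in the trace below): it exports all six bdevs as /dev/nbdX nodes through an RPC server on /var/tmp/spdk-nbd.sock, proves each mapping with a single 4 KiB direct-I/O read, tears everything down, and then re-exports the set for a randomized 1 MiB write/read data-verify pass. On a machine where the module is not already resident, something along these lines would be needed first; this is a sketch and not part of the recorded run, and nbds_max is an optional nbd module parameter that merely reserves enough device nodes:

# Load the kernel NBD driver so /sys/module/nbd exists.
sudo modprobe nbd nbds_max=16
ls /sys/module/nbd /dev/nbd*   # sanity check: module and device nodes present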
00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83986 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83986 /var/tmp/spdk-nbd.sock 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83986 ']' 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.603 05:02:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:29.864 [2024-11-21 05:02:46.367763] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
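Each iteration of the start/stop loop that follows repeats one three-step pattern: ask the bdev_svc app listening on /var/tmp/spdk-nbd.sock to export a bdev, poll /proc/partitions until the node registers, then read one 4 KiB block with O_DIRECT to prove the mapping is live. Condensed into a stand-alone sketch built only from commands that appear in this log (the /tmp/nbdtest scratch path is substituted for the repo-internal one):

SOCK=/var/tmp/spdk-nbd.sock
# Export the bdev as an NBD device and wait for the kernel to register it.
./scripts/rpc.py -s "$SOCK" nbd_start_disk nvme0n1 /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
# One direct 4 KiB read is enough to prove the export is readable.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# Tear the export down; nbd_get_disks returns [] once nothing is exported.
./scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0
./scripts/rpc.py -s "$SOCK" nbd_get_disks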
00:16:29.864 [2024-11-21 05:02:46.368145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.864 [2024-11-21 05:02:46.533789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.864 [2024-11-21 05:02:46.575410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.807 
1+0 records in 00:16:30.807 1+0 records out 00:16:30.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118401 s, 3.5 MB/s 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:30.807 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.070 1+0 records in 00:16:31.070 1+0 records out 00:16:31.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00443445 s, 924 kB/s 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:31.070 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:31.332 05:02:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.332 05:02:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.332 1+0 records in 00:16:31.332 1+0 records out 00:16:31.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128948 s, 3.2 MB/s 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:31.332 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.595 1+0 records in 00:16:31.595 1+0 records out 00:16:31.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122697 s, 3.3 MB/s 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:31.595 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.857 1+0 records in 00:16:31.857 1+0 records out 00:16:31.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108291 s, 3.8 MB/s 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:31.857 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:32.119 05:02:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.119 1+0 records in 00:16:32.119 1+0 records out 00:16:32.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106723 s, 3.8 MB/s 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:32.119 05:02:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:32.380 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:32.380 { 00:16:32.380 "nbd_device": "/dev/nbd0", 00:16:32.380 "bdev_name": "nvme0n1" 00:16:32.380 }, 00:16:32.380 { 00:16:32.380 "nbd_device": "/dev/nbd1", 00:16:32.380 "bdev_name": "nvme0n2" 00:16:32.380 }, 00:16:32.380 { 00:16:32.381 "nbd_device": "/dev/nbd2", 00:16:32.381 "bdev_name": "nvme0n3" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd3", 00:16:32.381 "bdev_name": "nvme1n1" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd4", 00:16:32.381 "bdev_name": "nvme2n1" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd5", 00:16:32.381 "bdev_name": "nvme3n1" 00:16:32.381 } 00:16:32.381 ]' 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd0", 00:16:32.381 "bdev_name": "nvme0n1" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd1", 00:16:32.381 "bdev_name": "nvme0n2" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd2", 00:16:32.381 "bdev_name": "nvme0n3" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd3", 00:16:32.381 "bdev_name": "nvme1n1" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd4", 00:16:32.381 "bdev_name": "nvme2n1" 00:16:32.381 }, 00:16:32.381 { 00:16:32.381 "nbd_device": "/dev/nbd5", 00:16:32.381 "bdev_name": "nvme3n1" 00:16:32.381 } 00:16:32.381 ]' 00:16:32.381 05:02:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.381 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.641 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.902 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.163 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.424 05:02:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.687 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:33.950 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:34.212 /dev/nbd0 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.212 1+0 records in 00:16:34.212 1+0 records out 00:16:34.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105341 s, 3.9 MB/s 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:34.212 05:02:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:34.474 /dev/nbd1 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.474 1+0 records in 00:16:34.474 1+0 records out 00:16:34.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110843 s, 3.7 MB/s 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:34.474 05:02:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:34.474 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:34.736 /dev/nbd10 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.736 1+0 records in 00:16:34.736 1+0 records out 00:16:34.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935625 s, 4.4 MB/s 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:34.736 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:34.998 /dev/nbd11 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.998 05:02:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.998 1+0 records in 00:16:34.998 1+0 records out 00:16:34.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149335 s, 2.7 MB/s 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:34.998 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:35.259 /dev/nbd12 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.259 1+0 records in 00:16:35.259 1+0 records out 00:16:35.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130491 s, 3.1 MB/s 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:35.259 05:02:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:35.520 /dev/nbd13 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.520 1+0 records in 00:16:35.520 1+0 records out 00:16:35.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119774 s, 3.4 MB/s 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.520 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:35.521 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:35.521 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.521 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd0", 00:16:35.783 "bdev_name": "nvme0n1" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd1", 00:16:35.783 "bdev_name": "nvme0n2" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd10", 00:16:35.783 "bdev_name": "nvme0n3" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd11", 00:16:35.783 "bdev_name": "nvme1n1" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd12", 00:16:35.783 "bdev_name": "nvme2n1" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd13", 00:16:35.783 "bdev_name": "nvme3n1" 00:16:35.783 } 00:16:35.783 ]' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd0", 00:16:35.783 "bdev_name": "nvme0n1" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd1", 00:16:35.783 "bdev_name": "nvme0n2" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd10", 00:16:35.783 "bdev_name": "nvme0n3" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd11", 00:16:35.783 "bdev_name": "nvme1n1" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd12", 00:16:35.783 "bdev_name": "nvme2n1" 00:16:35.783 }, 00:16:35.783 { 00:16:35.783 "nbd_device": "/dev/nbd13", 00:16:35.783 "bdev_name": "nvme3n1" 00:16:35.783 } 00:16:35.783 ]' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:35.783 /dev/nbd1 00:16:35.783 /dev/nbd10 00:16:35.783 /dev/nbd11 00:16:35.783 /dev/nbd12 00:16:35.783 /dev/nbd13' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:35.783 /dev/nbd1 00:16:35.783 /dev/nbd10 00:16:35.783 /dev/nbd11 00:16:35.783 /dev/nbd12 00:16:35.783 /dev/nbd13' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:35.783 256+0 records in 00:16:35.783 256+0 records out 00:16:35.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00832962 s, 126 MB/s 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:35.783 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:36.045 256+0 records in 00:16:36.045 256+0 records out 00:16:36.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239062 s, 4.4 MB/s 00:16:36.045 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:36.045 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:36.306 256+0 records in 00:16:36.306 256+0 records out 00:16:36.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238603 s, 
4.4 MB/s 00:16:36.306 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:36.306 05:02:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:36.568 256+0 records in 00:16:36.568 256+0 records out 00:16:36.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198337 s, 5.3 MB/s 00:16:36.568 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:36.568 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:36.829 256+0 records in 00:16:36.829 256+0 records out 00:16:36.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23729 s, 4.4 MB/s 00:16:36.829 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:36.829 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:37.092 256+0 records in 00:16:37.092 256+0 records out 00:16:37.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.306048 s, 3.4 MB/s 00:16:37.092 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:37.092 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:37.353 256+0 records in 00:16:37.353 256+0 records out 00:16:37.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251545 s, 4.2 MB/s 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.353 05:02:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:37.353 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.353 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:37.353 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.353 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:37.354 
05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.354 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.616 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.878 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.138 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.405 05:02:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.405 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.665 
05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.665 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:38.927 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:39.189 malloc_lvol_verify 00:16:39.189 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:39.189 b01a1095-9fec-4790-ab8f-ff843c0f8127 00:16:39.451 05:02:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:39.451 0a6da76c-df47-4ea5-9ed7-3c9a30e45739 00:16:39.451 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:39.713 /dev/nbd0 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:39.713 mke2fs 1.47.0 (5-Feb-2023) 00:16:39.713 Discarding device blocks: 0/4096 
done 00:16:39.713 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:39.713 00:16:39.713 Allocating group tables: 0/1 done 00:16:39.713 Writing inode tables: 0/1 done 00:16:39.713 Creating journal (1024 blocks): done 00:16:39.713 Writing superblocks and filesystem accounting information: 0/1 done 00:16:39.713 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:39.713 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83986 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83986 ']' 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83986 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83986 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.976 killing process with pid 83986 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83986' 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83986 00:16:39.976 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83986 00:16:40.237 05:02:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:40.237 00:16:40.237 real 0m10.606s 00:16:40.237 user 0m14.198s 00:16:40.237 sys 0m3.986s 00:16:40.237 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.237 ************************************ 00:16:40.237 END TEST bdev_nbd 00:16:40.237 ************************************ 00:16:40.237 05:02:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
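For readers following the trace, the bdev_nbd phase above exercises a fixed start/verify/stop pattern from bdev/nbd_common.sh. The sketch below is a reconstruction from the xtrace output, not the verbatim helper source: the 20-try polling bound, the /proc/partitions check, the direct-I/O dd probe, and the 1 MiB write-then-cmp verification are all visible in the log, while the sleep interval and the temp-file paths are assumptions added for illustration.

# Shape of waitfornbd as exercised above: poll until the kernel exposes the
# nbd device, then prove it can serve one direct 4 KiB read.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed back-off; the trace only shows the loop bounds
    done
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]  # trace: '[' 4096 '!=' 0 ']' followed by return 0
}

# Data round-trip as traced: 1 MiB of urandom written through every nbd
# device, then the first 1M of each device compared back to the source.
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M /tmp/nbdrandtest "$dev"
done
rm /tmp/nbdrandtest

Teardown is the mirror image: nbd_stop_disk issues the RPC for each device and waitfornbd_exit polls the same /proc/partitions entry until it disappears, which is the repeated grep/break pattern for nbd0 through nbd13 in the trace above.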
00:16:40.237 05:02:56 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:40.237 05:02:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:40.237 05:02:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:40.237 05:02:56 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:40.237 05:02:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.237 05:02:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.237 05:02:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.237 ************************************ 00:16:40.237 START TEST bdev_fio 00:16:40.238 ************************************ 00:16:40.238 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:40.238 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:40.499 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:40.499 05:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:40.499 ************************************ 00:16:40.499 START TEST bdev_fio_rw_verify 00:16:40.499 ************************************ 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:40.499 05:02:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:40.774 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:40.774 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:40.774 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:40.774 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:40.774 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:40.774 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:40.774 fio-3.35 00:16:40.774 Starting 6 threads 00:16:53.081 00:16:53.081 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=84388: Thu Nov 21 05:03:07 2024 00:16:53.081 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(616MiB/10003msec) 00:16:53.082 slat (usec): min=2, max=5270, avg= 6.02, stdev=20.22 00:16:53.082 clat (usec): min=83, max=8629, avg=1187.58, stdev=771.19 00:16:53.082 lat (usec): min=87, max=8644, avg=1193.61, stdev=771.90 
00:16:53.082 clat percentiles (usec):
00:16:53.082 | 50.000th=[ 1057], 99.000th=[ 3523], 99.900th=[ 5080], 99.990th=[ 6980],
00:16:53.082 | 99.999th=[ 8586]
00:16:53.082 write: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(628MiB/10003msec); 0 zone resets
00:16:53.082 slat (usec): min=12, max=4837, avg=40.74, stdev=144.37
00:16:53.082 clat (usec): min=69, max=11762, avg=1498.18, stdev=882.56
00:16:53.082 lat (usec): min=84, max=11832, avg=1538.92, stdev=897.63
00:16:53.082 clat percentiles (usec):
00:16:53.082 | 50.000th=[ 1352], 99.000th=[ 4228], 99.900th=[ 5669], 99.990th=[ 7373],
00:16:53.082 | 99.999th=[11731]
00:16:53.082 bw ( KiB/s): min=49044, max=121875, per=100.00%, avg=65007.37, stdev=3319.77, samples=114
00:16:53.082 iops : min=12258, max=30468, avg=16250.68, stdev=829.97, samples=114
00:16:53.082 lat (usec) : 100=0.02%, 250=3.79%, 500=10.63%, 750=13.05%, 1000=12.49%
00:16:53.082 lat (msec) : 2=40.96%, 4=18.12%, 10=0.95%, 20=0.01%
00:16:53.082 cpu : usr=41.44%, sys=32.61%, ctx=5743, majf=0, minf=15659
00:16:53.082 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:53.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:53.082 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:53.082 issued rwts: total=157722,160877,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:53.082 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:53.082
00:16:53.082 Run status group 0 (all jobs):
00:16:53.082 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=616MiB (646MB), run=10003-10003msec
00:16:53.082 WRITE: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=628MiB (659MB), run=10003-10003msec
00:16:53.082 -----------------------------------------------------
00:16:53.082 Suppressions used:
00:16:53.082 count bytes template
00:16:53.082 6 48 /usr/src/fio/parse.c
00:16:53.082 3053 293088 /usr/src/fio/iolog.c
00:16:53.082 1 8 libtcmalloc_minimal.so
00:16:53.082 1 904 libcrypto.so
00:16:53.082 -----------------------------------------------------
00:16:53.082
00:16:53.082
00:16:53.082 real 0m11.243s
00:16:53.082 user 0m25.665s
00:16:53.082 sys 0m19.919s
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:16:53.082 ************************************
00:16:53.082 END TEST bdev_fio_rw_verify
00:16:53.082 ************************************
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "72129b5c-7450-45ab-b647-ce244c1de3a6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72129b5c-7450-45ab-b647-ce244c1de3a6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "d28ed34e-b5c3-4c42-a299-04b16ebf9cd2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d28ed34e-b5c3-4c42-a299-04b16ebf9cd2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "fd9f60a0-4b6d-4b89-ba86-5824fa5ccf2f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd9f60a0-4b6d-4b89-ba86-5824fa5ccf2f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "aad6fa84-f272-4636-be36-9884e4d063ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "aad6fa84-f272-4636-be36-9884e4d063ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b8d8e707-860b-4804-9ff8-307904b1d8ac"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b8d8e707-860b-4804-9ff8-307904b1d8ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "cf3dcdba-b96a-4fa0-b5c1-ddf791d04cf8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "cf3dcdba-b96a-4fa0-b5c1-ddf791d04cf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:53.082 /home/vagrant/spdk_repo/spdk 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:53.082 ************************************ 00:16:53.082 
END TEST bdev_fio 00:16:53.082 ************************************ 00:16:53.082 00:16:53.082 real 0m11.410s 00:16:53.082 user 0m25.732s 00:16:53.082 sys 0m19.997s 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.082 05:03:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:53.083 05:03:08 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:53.083 05:03:08 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:53.083 05:03:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:53.083 05:03:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.083 05:03:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.083 ************************************ 00:16:53.083 START TEST bdev_verify 00:16:53.083 ************************************ 00:16:53.083 05:03:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:53.083 [2024-11-21 05:03:08.505185] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:53.083 [2024-11-21 05:03:08.505347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84557 ] 00:16:53.083 [2024-11-21 05:03:08.672110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.083 [2024-11-21 05:03:08.714113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.083 [2024-11-21 05:03:08.714180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.083 Running I/O for 5 seconds... 
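The bdevperf command line above is the core of this verification stage and reappears (with different -o and -w values) in the big-I/O and write_zeroes stages that follow, so a hedged gloss of its flags is worth spelling out once. The meanings below follow bdevperf's usage text as best understood; the reading of -C in particular is inferred from the results table, where every bdev shows one job per core:

# bdev_verify invocation, as shown in the trace above:
#   -q 128      keep up to 128 I/Os outstanding per job
#   -o 4096     use 4 KiB I/Os
#   -w verify   write a pattern and read it back for comparison
#   -t 5        run each job for 5 seconds
#   -C          let every reactor core submit to every bdev
#   -m 0x3      core mask 0x3, i.e. cores 0 and 1
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With two cores and -C, each of the six xnvme bdevs gets a job on Core Mask 0x1 and another on Core Mask 0x2, which is why every device appears twice in the latency table that follows.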
00:16:54.597 25344.00 IOPS, 99.00 MiB/s [2024-11-21T05:03:12.274Z] 24544.00 IOPS, 95.88 MiB/s [2024-11-21T05:03:13.221Z] 24149.33 IOPS, 94.33 MiB/s [2024-11-21T05:03:14.164Z] 24112.00 IOPS, 94.19 MiB/s [2024-11-21T05:03:14.164Z] 23987.20 IOPS, 93.70 MiB/s
00:16:57.430 Latency(us)
00:16:57.430 [2024-11-21T05:03:14.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:57.430 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x0 length 0x80000
00:16:57.430 nvme0n1 : 5.03 1831.26 7.15 0.00 0.00 69767.09 9830.40 67754.14
00:16:57.430 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x80000 length 0x80000
00:16:57.430 nvme0n1 : 5.05 2004.27 7.83 0.00 0.00 63741.99 10032.05 79853.10
00:16:57.430 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x0 length 0x80000
00:16:57.430 nvme0n2 : 5.08 1813.85 7.09 0.00 0.00 70308.51 12250.19 61301.37
00:16:57.430 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x80000 length 0x80000
00:16:57.430 nvme0n2 : 5.06 1998.03 7.80 0.00 0.00 63829.11 11494.01 72593.72
00:16:57.430 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x0 length 0x80000
00:16:57.430 nvme0n3 : 5.07 1816.28 7.09 0.00 0.00 70097.39 9981.64 65334.35
00:16:57.430 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x80000 length 0x80000
00:16:57.430 nvme0n3 : 5.05 2003.65 7.83 0.00 0.00 63541.41 9074.22 65737.65
00:16:57.430 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x0 length 0x20000
00:16:57.430 nvme1n1 : 5.08 1815.01 7.09 0.00 0.00 70024.25 13107.20 62511.26
00:16:57.430 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x20000 length 0x20000
00:16:57.430 nvme1n1 : 5.05 2003.02 7.82 0.00 0.00 63450.93 6805.66 60494.77
00:16:57.430 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x0 length 0xbd0bd
00:16:57.430 nvme2n1 : 5.08 2369.14 9.25 0.00 0.00 53382.53 5696.59 58881.58
00:16:57.430 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:57.430 nvme2n1 : 5.07 2675.49 10.45 0.00 0.00 47389.33 5116.85 56461.78
00:16:57.430 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:57.430 Verification LBA range: start 0x0 length 0xa0000
00:16:57.430 nvme3n1 : 5.09 1761.16 6.88 0.00 0.00 71779.19 7259.37 79853.10
00:16:57.430 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:57.431 Verification LBA range: start 0xa0000 length 0xa0000
00:16:57.431 nvme3n1 : 5.07 1690.63 6.60 0.00 0.00 74827.80 4108.60 102437.81
00:16:57.431 [2024-11-21T05:03:14.165Z] ===================================================================================================================
00:16:57.431 [2024-11-21T05:03:14.165Z] Total : 23781.81 92.90 0.00 0.00 64148.72 4108.60 102437.81
00:16:57.692
00:16:57.692 real 0m5.962s
00:16:57.692 user 0m9.358s
00:16:57.692 sys 0m1.623s
00:16:57.692 05:03:14 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.693 ************************************ 00:16:57.693 END TEST bdev_verify 00:16:57.693 ************************************ 00:16:57.693 05:03:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:57.954 05:03:14 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:57.954 05:03:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:57.954 05:03:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.954 05:03:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.954 ************************************ 00:16:57.954 START TEST bdev_verify_big_io 00:16:57.954 ************************************ 00:16:57.954 05:03:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:57.954 [2024-11-21 05:03:14.538321] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:16:57.954 [2024-11-21 05:03:14.538489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84646 ] 00:16:58.216 [2024-11-21 05:03:14.705991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.216 [2024-11-21 05:03:14.748225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.216 [2024-11-21 05:03:14.748267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.478 Running I/O for 5 seconds... 
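A quick sanity check that applies to all of the result tables in this log: bdevperf's MiB/s column is just IOPS multiplied by the fixed I/O size from -o. Both the 4 KiB verify run above and the 64 KiB big-I/O run below reproduce exactly from numbers in the log (the awk one-liners are illustrative, not part of the test):

# verify run, first sample: 25344 IOPS at 4 KiB
awk 'BEGIN { printf "%.2f MiB/s\n", 25344 * 4096 / 1048576 }'   # -> 99.00 MiB/s
# big-I/O run, first sample: 1672 IOPS at 64 KiB
awk 'BEGIN { printf "%.2f MiB/s\n", 1672 * 65536 / 1048576 }'   # -> 104.50 MiB/s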
00:17:04.606 1672.00 IOPS, 104.50 MiB/s [2024-11-21T05:03:21.340Z] 3563.00 IOPS, 222.69 MiB/s
00:17:04.606 Latency(us)
00:17:04.606 [2024-11-21T05:03:21.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:04.606 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x0 length 0x8000
00:17:04.606 nvme0n1 : 5.61 125.39 7.84 0.00 0.00 997249.97 43959.53 1187310.67
00:17:04.606 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x8000 length 0x8000
00:17:04.606 nvme0n1 : 5.70 134.79 8.42 0.00 0.00 934184.24 14014.62 942105.21
00:17:04.606 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x0 length 0x8000
00:17:04.606 nvme0n2 : 5.68 121.02 7.56 0.00 0.00 1002057.75 66544.25 1122782.92
00:17:04.606 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x8000 length 0x8000
00:17:04.606 nvme0n2 : 5.76 133.26 8.33 0.00 0.00 914790.66 112116.97 819502.47
00:17:04.606 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x0 length 0x8000
00:17:04.606 nvme0n3 : 5.76 130.67 8.17 0.00 0.00 894345.18 50815.61 1613193.85
00:17:04.606 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x8000 length 0x8000
00:17:04.606 nvme0n3 : 5.70 109.48 6.84 0.00 0.00 1079090.15 43757.88 2387526.89
00:17:04.606 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x0 length 0x2000
00:17:04.606 nvme1n1 : 5.69 132.23 8.26 0.00 0.00 860618.54 74206.92 1716438.25
00:17:04.606 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x2000 length 0x2000
00:17:04.606 nvme1n1 : 5.70 101.02 6.31 0.00 0.00 1143982.69 113730.17 2103604.78
00:17:04.606 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x0 length 0xbd0b
00:17:04.606 nvme2n1 : 5.81 154.28 9.64 0.00 0.00 720155.34 24298.73 1387346.71
00:17:04.606 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0xbd0b length 0xbd0b
00:17:04.606 nvme2n1 : 5.78 136.81 8.55 0.00 0.00 822010.98 46782.62 1013085.74
00:17:04.606 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0x0 length 0xa000
00:17:04.606 nvme3n1 : 5.83 179.67 11.23 0.00 0.00 601196.61 601.80 1071160.71
00:17:04.606 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:17:04.606 Verification LBA range: start 0xa000 length 0xa000
00:17:04.606 nvme3n1 : 5.78 163.34 10.21 0.00 0.00 666742.29 4234.63 1329271.73
00:17:04.606 [2024-11-21T05:03:21.340Z] ===================================================================================================================
00:17:04.606 [2024-11-21T05:03:21.340Z] Total : 1621.97 101.37 0.00 0.00 861179.26 601.80 2387526.89
00:17:04.606
00:17:04.606 real 0m6.672s
00:17:04.606 user 0m12.116s
00:17:04.606 sys 0m0.533s
00:17:04.606 05:03:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:04.606 ************************************
00:17:04.606 END TEST bdev_verify_big_io
00:17:04.606 ************************************ 05:03:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:17:04.606 05:03:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:04.606 05:03:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:17:04.606 05:03:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:04.606 05:03:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:04.606 ************************************
00:17:04.606 START TEST bdev_write_zeroes
00:17:04.606 ************************************
00:17:04.606 05:03:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:17:04.868 [2024-11-21 05:03:21.253456] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:17:04.868 [2024-11-21 05:03:21.253581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84739 ]
00:17:04.868 [2024-11-21 05:03:21.414246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:04.868 [2024-11-21 05:03:21.438736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:05.129 Running I/O for 1 seconds...
00:17:06.074 76544.00 IOPS, 299.00 MiB/s
00:17:06.074 Latency(us)
00:17:06.074 [2024-11-21T05:03:22.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:06.074 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:06.074 nvme0n1 : 1.01 12616.64 49.28 0.00 0.00 10134.85 4259.84 21979.77
00:17:06.074 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:06.074 nvme0n2 : 1.02 12475.04 48.73 0.00 0.00 10242.56 6049.48 22483.89
00:17:06.074 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:06.074 nvme0n3 : 1.02 12564.71 49.08 0.00 0.00 10162.00 5999.06 21878.94
00:17:06.074 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:06.074 nvme1n1 : 1.02 12550.21 49.02 0.00 0.00 10166.83 6024.27 20971.52
00:17:06.074 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:06.074 nvme2n1 : 1.02 13462.39 52.59 0.00 0.00 9469.28 4511.90 21072.34
00:17:06.074 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:17:06.074 nvme3n1 : 1.02 12661.33 49.46 0.00 0.00 10001.42 3579.27 22282.24
00:17:06.074 [2024-11-21T05:03:22.808Z] ===================================================================================================================
00:17:06.074 [2024-11-21T05:03:22.808Z] Total : 76330.33 298.17 0.00 0.00 10022.63 3579.27 22483.89
00:17:06.335
00:17:06.335 real 0m1.679s
00:17:06.335 user 0m1.027s
00:17:06.335 sys 0m0.465s
00:17:06.335 ************************************
00:17:06.335 05:03:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:06.335 05:03:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:17:06.335 END
TEST bdev_write_zeroes 00:17:06.335 ************************************ 00:17:06.335 05:03:22 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:06.335 05:03:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:06.335 05:03:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.335 05:03:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.335 ************************************ 00:17:06.335 START TEST bdev_json_nonenclosed 00:17:06.335 ************************************ 00:17:06.335 05:03:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:06.335 [2024-11-21 05:03:22.997734] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:17:06.335 [2024-11-21 05:03:22.997864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84782 ] 00:17:06.596 [2024-11-21 05:03:23.159049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.596 [2024-11-21 05:03:23.183043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.596 [2024-11-21 05:03:23.183136] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:06.596 [2024-11-21 05:03:23.183152] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:06.596 [2024-11-21 05:03:23.183167] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:06.596 00:17:06.596 real 0m0.318s 00:17:06.596 user 0m0.119s 00:17:06.596 sys 0m0.095s 00:17:06.596 05:03:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.596 ************************************ 00:17:06.596 END TEST bdev_json_nonenclosed 00:17:06.596 ************************************ 00:17:06.596 05:03:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:06.596 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:06.596 05:03:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:06.596 05:03:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.596 05:03:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.596 ************************************ 00:17:06.596 START TEST bdev_json_nonarray 00:17:06.596 ************************************ 00:17:06.596 05:03:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:06.857 [2024-11-21 05:03:23.381292] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:17:06.857 [2024-11-21 05:03:23.381417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84802 ] 00:17:06.857 [2024-11-21 05:03:23.542658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.857 [2024-11-21 05:03:23.566602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.857 [2024-11-21 05:03:23.566713] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:17:06.857 [2024-11-21 05:03:23.566729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:06.857 [2024-11-21 05:03:23.566744] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:07.118 00:17:07.118 real 0m0.317s 00:17:07.118 user 0m0.102s 00:17:07.118 sys 0m0.112s 00:17:07.118 05:03:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.118 ************************************ 00:17:07.118 05:03:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:07.118 END TEST bdev_json_nonarray 00:17:07.118 ************************************ 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:07.118 05:03:23 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:07.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:17.770 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:17.770 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:17.770 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:17.770 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:17.770 00:17:17.770 real 0m52.846s 00:17:17.770 user 1m10.704s 00:17:17.770 sys 0m44.252s 00:17:17.770 05:03:33 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.770 05:03:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 ************************************ 00:17:17.770 END TEST blockdev_xnvme 00:17:17.770 ************************************ 00:17:17.770 05:03:33 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:17.770 05:03:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.770 05:03:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.770 05:03:33 -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.770 ************************************ 00:17:17.770 START TEST ublk 00:17:17.770 ************************************ 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:17.770 * Looking for test storage... 00:17:17.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.770 05:03:33 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.770 05:03:33 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.770 05:03:33 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.770 05:03:33 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.770 05:03:33 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.770 05:03:33 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:17.770 05:03:33 ublk -- scripts/common.sh@345 -- # : 1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.770 05:03:33 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.770 05:03:33 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@353 -- # local d=1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.770 05:03:33 ublk -- scripts/common.sh@355 -- # echo 1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.770 05:03:33 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@353 -- # local d=2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.770 05:03:33 ublk -- scripts/common.sh@355 -- # echo 2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.770 05:03:33 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.770 05:03:33 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.770 05:03:33 ublk -- scripts/common.sh@368 -- # return 0 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:17.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.770 --rc genhtml_branch_coverage=1 00:17:17.770 --rc genhtml_function_coverage=1 00:17:17.770 --rc genhtml_legend=1 00:17:17.770 --rc geninfo_all_blocks=1 00:17:17.770 --rc geninfo_unexecuted_blocks=1 00:17:17.770 00:17:17.770 ' 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:17.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.770 --rc genhtml_branch_coverage=1 00:17:17.770 --rc genhtml_function_coverage=1 00:17:17.770 --rc genhtml_legend=1 00:17:17.770 --rc geninfo_all_blocks=1 00:17:17.770 --rc geninfo_unexecuted_blocks=1 00:17:17.770 00:17:17.770 ' 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:17.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.770 --rc genhtml_branch_coverage=1 00:17:17.770 --rc genhtml_function_coverage=1 00:17:17.770 --rc genhtml_legend=1 00:17:17.770 --rc geninfo_all_blocks=1 00:17:17.770 --rc geninfo_unexecuted_blocks=1 00:17:17.770 00:17:17.770 ' 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:17.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.770 --rc genhtml_branch_coverage=1 00:17:17.770 --rc genhtml_function_coverage=1 00:17:17.770 --rc genhtml_legend=1 00:17:17.770 --rc geninfo_all_blocks=1 00:17:17.770 --rc geninfo_unexecuted_blocks=1 00:17:17.770 00:17:17.770 ' 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:17.770 05:03:33 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:17.770 05:03:33 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:17.770 05:03:33 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:17.770 05:03:33 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:17.770 05:03:33 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:17.770 05:03:33 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:17.770 05:03:33 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:17.770 05:03:33 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:17.770 05:03:33 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:17.770 05:03:33 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.770 05:03:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 ************************************ 00:17:17.770 START TEST test_save_ublk_config 00:17:17.770 ************************************ 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=85100 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 85100 00:17:17.770 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85100 ']' 00:17:17.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.771 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.771 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.771 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.771 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.771 05:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:17.771 [2024-11-21 05:03:33.906167] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:17:17.771 [2024-11-21 05:03:33.907010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85100 ] 00:17:17.771 [2024-11-21 05:03:34.080595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.771 [2024-11-21 05:03:34.121600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.030 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.030 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:18.030 05:03:34 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:18.030 05:03:34 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:18.030 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.030 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:18.030 [2024-11-21 05:03:34.748631] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:18.030 [2024-11-21 05:03:34.749426] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:18.291 malloc0 00:17:18.291 [2024-11-21 05:03:34.780767] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:18.291 [2024-11-21 05:03:34.780858] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:18.291 [2024-11-21 05:03:34.780866] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:18.291 [2024-11-21 05:03:34.780884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:18.291 [2024-11-21 05:03:34.789730] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:18.291 [2024-11-21 05:03:34.789767] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:18.291 [2024-11-21 05:03:34.796641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:18.291 [2024-11-21 05:03:34.796753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:18.291 [2024-11-21 05:03:34.813635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:18.291 0 00:17:18.291 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.291 05:03:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:18.291 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.291 05:03:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:18.551 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.551 05:03:35 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:18.551 "subsystems": [ 00:17:18.551 { 00:17:18.551 "subsystem": "fsdev", 00:17:18.551 "config": [ 00:17:18.551 { 00:17:18.551 "method": "fsdev_set_opts", 00:17:18.551 "params": { 00:17:18.551 "fsdev_io_pool_size": 65535, 00:17:18.551 "fsdev_io_cache_size": 256 00:17:18.551 } 00:17:18.551 } 00:17:18.551 ] 00:17:18.551 }, 00:17:18.551 { 00:17:18.551 "subsystem": "keyring", 00:17:18.551 "config": [] 00:17:18.551 }, 00:17:18.551 { 00:17:18.551 "subsystem": "iobuf", 00:17:18.551 "config": [ 00:17:18.551 { 
00:17:18.551 "method": "iobuf_set_options", 00:17:18.551 "params": { 00:17:18.551 "small_pool_count": 8192, 00:17:18.551 "large_pool_count": 1024, 00:17:18.551 "small_bufsize": 8192, 00:17:18.551 "large_bufsize": 135168, 00:17:18.551 "enable_numa": false 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "sock", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "sock_set_default_impl", 00:17:18.552 "params": { 00:17:18.552 "impl_name": "posix" 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "sock_impl_set_options", 00:17:18.552 "params": { 00:17:18.552 "impl_name": "ssl", 00:17:18.552 "recv_buf_size": 4096, 00:17:18.552 "send_buf_size": 4096, 00:17:18.552 "enable_recv_pipe": true, 00:17:18.552 "enable_quickack": false, 00:17:18.552 "enable_placement_id": 0, 00:17:18.552 "enable_zerocopy_send_server": true, 00:17:18.552 "enable_zerocopy_send_client": false, 00:17:18.552 "zerocopy_threshold": 0, 00:17:18.552 "tls_version": 0, 00:17:18.552 "enable_ktls": false 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "sock_impl_set_options", 00:17:18.552 "params": { 00:17:18.552 "impl_name": "posix", 00:17:18.552 "recv_buf_size": 2097152, 00:17:18.552 "send_buf_size": 2097152, 00:17:18.552 "enable_recv_pipe": true, 00:17:18.552 "enable_quickack": false, 00:17:18.552 "enable_placement_id": 0, 00:17:18.552 "enable_zerocopy_send_server": true, 00:17:18.552 "enable_zerocopy_send_client": false, 00:17:18.552 "zerocopy_threshold": 0, 00:17:18.552 "tls_version": 0, 00:17:18.552 "enable_ktls": false 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "vmd", 00:17:18.552 "config": [] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "accel", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "accel_set_options", 00:17:18.552 "params": { 00:17:18.552 "small_cache_size": 128, 00:17:18.552 "large_cache_size": 16, 00:17:18.552 "task_count": 2048, 00:17:18.552 "sequence_count": 2048, 00:17:18.552 "buf_count": 2048 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "bdev", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "bdev_set_options", 00:17:18.552 "params": { 00:17:18.552 "bdev_io_pool_size": 65535, 00:17:18.552 "bdev_io_cache_size": 256, 00:17:18.552 "bdev_auto_examine": true, 00:17:18.552 "iobuf_small_cache_size": 128, 00:17:18.552 "iobuf_large_cache_size": 16 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "bdev_raid_set_options", 00:17:18.552 "params": { 00:17:18.552 "process_window_size_kb": 1024, 00:17:18.552 "process_max_bandwidth_mb_sec": 0 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "bdev_iscsi_set_options", 00:17:18.552 "params": { 00:17:18.552 "timeout_sec": 30 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "bdev_nvme_set_options", 00:17:18.552 "params": { 00:17:18.552 "action_on_timeout": "none", 00:17:18.552 "timeout_us": 0, 00:17:18.552 "timeout_admin_us": 0, 00:17:18.552 "keep_alive_timeout_ms": 10000, 00:17:18.552 "arbitration_burst": 0, 00:17:18.552 "low_priority_weight": 0, 00:17:18.552 "medium_priority_weight": 0, 00:17:18.552 "high_priority_weight": 0, 00:17:18.552 "nvme_adminq_poll_period_us": 10000, 00:17:18.552 "nvme_ioq_poll_period_us": 0, 00:17:18.552 "io_queue_requests": 0, 00:17:18.552 "delay_cmd_submit": true, 00:17:18.552 "transport_retry_count": 4, 00:17:18.552 
"bdev_retry_count": 3, 00:17:18.552 "transport_ack_timeout": 0, 00:17:18.552 "ctrlr_loss_timeout_sec": 0, 00:17:18.552 "reconnect_delay_sec": 0, 00:17:18.552 "fast_io_fail_timeout_sec": 0, 00:17:18.552 "disable_auto_failback": false, 00:17:18.552 "generate_uuids": false, 00:17:18.552 "transport_tos": 0, 00:17:18.552 "nvme_error_stat": false, 00:17:18.552 "rdma_srq_size": 0, 00:17:18.552 "io_path_stat": false, 00:17:18.552 "allow_accel_sequence": false, 00:17:18.552 "rdma_max_cq_size": 0, 00:17:18.552 "rdma_cm_event_timeout_ms": 0, 00:17:18.552 "dhchap_digests": [ 00:17:18.552 "sha256", 00:17:18.552 "sha384", 00:17:18.552 "sha512" 00:17:18.552 ], 00:17:18.552 "dhchap_dhgroups": [ 00:17:18.552 "null", 00:17:18.552 "ffdhe2048", 00:17:18.552 "ffdhe3072", 00:17:18.552 "ffdhe4096", 00:17:18.552 "ffdhe6144", 00:17:18.552 "ffdhe8192" 00:17:18.552 ] 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "bdev_nvme_set_hotplug", 00:17:18.552 "params": { 00:17:18.552 "period_us": 100000, 00:17:18.552 "enable": false 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "bdev_malloc_create", 00:17:18.552 "params": { 00:17:18.552 "name": "malloc0", 00:17:18.552 "num_blocks": 8192, 00:17:18.552 "block_size": 4096, 00:17:18.552 "physical_block_size": 4096, 00:17:18.552 "uuid": "de363f0c-51aa-4056-bfaf-ca851089cd51", 00:17:18.552 "optimal_io_boundary": 0, 00:17:18.552 "md_size": 0, 00:17:18.552 "dif_type": 0, 00:17:18.552 "dif_is_head_of_md": false, 00:17:18.552 "dif_pi_format": 0 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "bdev_wait_for_examine" 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "scsi", 00:17:18.552 "config": null 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "scheduler", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "framework_set_scheduler", 00:17:18.552 "params": { 00:17:18.552 "name": "static" 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "vhost_scsi", 00:17:18.552 "config": [] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "vhost_blk", 00:17:18.552 "config": [] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "ublk", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "ublk_create_target", 00:17:18.552 "params": { 00:17:18.552 "cpumask": "1" 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "ublk_start_disk", 00:17:18.552 "params": { 00:17:18.552 "bdev_name": "malloc0", 00:17:18.552 "ublk_id": 0, 00:17:18.552 "num_queues": 1, 00:17:18.552 "queue_depth": 128 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "nbd", 00:17:18.552 "config": [] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "nvmf", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "nvmf_set_config", 00:17:18.552 "params": { 00:17:18.552 "discovery_filter": "match_any", 00:17:18.552 "admin_cmd_passthru": { 00:17:18.552 "identify_ctrlr": false 00:17:18.552 }, 00:17:18.552 "dhchap_digests": [ 00:17:18.552 "sha256", 00:17:18.552 "sha384", 00:17:18.552 "sha512" 00:17:18.552 ], 00:17:18.552 "dhchap_dhgroups": [ 00:17:18.552 "null", 00:17:18.552 "ffdhe2048", 00:17:18.552 "ffdhe3072", 00:17:18.552 "ffdhe4096", 00:17:18.552 "ffdhe6144", 00:17:18.552 "ffdhe8192" 00:17:18.552 ] 00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "nvmf_set_max_subsystems", 00:17:18.552 "params": { 00:17:18.552 "max_subsystems": 1024 
00:17:18.552 } 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "method": "nvmf_set_crdt", 00:17:18.552 "params": { 00:17:18.552 "crdt1": 0, 00:17:18.552 "crdt2": 0, 00:17:18.552 "crdt3": 0 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }, 00:17:18.552 { 00:17:18.552 "subsystem": "iscsi", 00:17:18.552 "config": [ 00:17:18.552 { 00:17:18.552 "method": "iscsi_set_options", 00:17:18.552 "params": { 00:17:18.552 "node_base": "iqn.2016-06.io.spdk", 00:17:18.552 "max_sessions": 128, 00:17:18.552 "max_connections_per_session": 2, 00:17:18.552 "max_queue_depth": 64, 00:17:18.552 "default_time2wait": 2, 00:17:18.552 "default_time2retain": 20, 00:17:18.552 "first_burst_length": 8192, 00:17:18.552 "immediate_data": true, 00:17:18.552 "allow_duplicated_isid": false, 00:17:18.552 "error_recovery_level": 0, 00:17:18.552 "nop_timeout": 60, 00:17:18.552 "nop_in_interval": 30, 00:17:18.552 "disable_chap": false, 00:17:18.552 "require_chap": false, 00:17:18.552 "mutual_chap": false, 00:17:18.552 "chap_group": 0, 00:17:18.552 "max_large_datain_per_connection": 64, 00:17:18.552 "max_r2t_per_connection": 4, 00:17:18.552 "pdu_pool_size": 36864, 00:17:18.552 "immediate_data_pool_size": 16384, 00:17:18.552 "data_out_pool_size": 2048 00:17:18.552 } 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 } 00:17:18.552 ] 00:17:18.552 }' 00:17:18.552 05:03:35 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 85100 00:17:18.552 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85100 ']' 00:17:18.552 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85100 00:17:18.552 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:18.552 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.553 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85100 00:17:18.553 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.553 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.553 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85100' 00:17:18.553 killing process with pid 85100 00:17:18.553 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85100 00:17:18.553 05:03:35 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85100 00:17:18.813 [2024-11-21 05:03:35.517908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:19.073 [2024-11-21 05:03:35.549777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:19.073 [2024-11-21 05:03:35.549945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:19.073 [2024-11-21 05:03:35.556654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:19.073 [2024-11-21 05:03:35.556738] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:19.073 [2024-11-21 05:03:35.556751] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:19.073 [2024-11-21 05:03:35.556788] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:19.073 [2024-11-21 05:03:35.556950] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=85139 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 85139 00:17:19.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85139 ']' 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:19.646 05:03:36 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:19.646 "subsystems": [ 00:17:19.646 { 00:17:19.646 "subsystem": "fsdev", 00:17:19.646 "config": [ 00:17:19.646 { 00:17:19.646 "method": "fsdev_set_opts", 00:17:19.646 "params": { 00:17:19.646 "fsdev_io_pool_size": 65535, 00:17:19.646 "fsdev_io_cache_size": 256 00:17:19.646 } 00:17:19.646 } 00:17:19.646 ] 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "subsystem": "keyring", 00:17:19.646 "config": [] 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "subsystem": "iobuf", 00:17:19.646 "config": [ 00:17:19.646 { 00:17:19.646 "method": "iobuf_set_options", 00:17:19.646 "params": { 00:17:19.646 "small_pool_count": 8192, 00:17:19.646 "large_pool_count": 1024, 00:17:19.646 "small_bufsize": 8192, 00:17:19.646 "large_bufsize": 135168, 00:17:19.646 "enable_numa": false 00:17:19.646 } 00:17:19.646 } 00:17:19.646 ] 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "subsystem": "sock", 00:17:19.646 "config": [ 00:17:19.646 { 00:17:19.646 "method": "sock_set_default_impl", 00:17:19.646 "params": { 00:17:19.646 "impl_name": "posix" 00:17:19.646 } 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "method": "sock_impl_set_options", 00:17:19.646 "params": { 00:17:19.646 "impl_name": "ssl", 00:17:19.646 "recv_buf_size": 4096, 00:17:19.646 "send_buf_size": 4096, 00:17:19.646 "enable_recv_pipe": true, 00:17:19.646 "enable_quickack": false, 00:17:19.646 "enable_placement_id": 0, 00:17:19.646 "enable_zerocopy_send_server": true, 00:17:19.646 "enable_zerocopy_send_client": false, 00:17:19.646 "zerocopy_threshold": 0, 00:17:19.646 "tls_version": 0, 00:17:19.646 "enable_ktls": false 00:17:19.646 } 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "method": "sock_impl_set_options", 00:17:19.646 "params": { 00:17:19.646 "impl_name": "posix", 00:17:19.646 "recv_buf_size": 2097152, 00:17:19.646 "send_buf_size": 2097152, 00:17:19.646 "enable_recv_pipe": true, 00:17:19.646 "enable_quickack": false, 00:17:19.646 "enable_placement_id": 0, 00:17:19.646 "enable_zerocopy_send_server": true, 00:17:19.646 "enable_zerocopy_send_client": false, 00:17:19.646 "zerocopy_threshold": 0, 00:17:19.646 "tls_version": 0, 00:17:19.646 "enable_ktls": false 00:17:19.646 } 00:17:19.646 } 00:17:19.646 ] 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "subsystem": "vmd", 00:17:19.646 "config": [] 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "subsystem": "accel", 00:17:19.646 "config": [ 00:17:19.646 { 00:17:19.646 "method": "accel_set_options", 00:17:19.646 "params": { 00:17:19.646 "small_cache_size": 128, 
00:17:19.646 "large_cache_size": 16, 00:17:19.646 "task_count": 2048, 00:17:19.646 "sequence_count": 2048, 00:17:19.646 "buf_count": 2048 00:17:19.646 } 00:17:19.646 } 00:17:19.646 ] 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "subsystem": "bdev", 00:17:19.646 "config": [ 00:17:19.646 { 00:17:19.646 "method": "bdev_set_options", 00:17:19.646 "params": { 00:17:19.646 "bdev_io_pool_size": 65535, 00:17:19.646 "bdev_io_cache_size": 256, 00:17:19.646 "bdev_auto_examine": true, 00:17:19.646 "iobuf_small_cache_size": 128, 00:17:19.646 "iobuf_large_cache_size": 16 00:17:19.646 } 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "method": "bdev_raid_set_options", 00:17:19.646 "params": { 00:17:19.646 "process_window_size_kb": 1024, 00:17:19.646 "process_max_bandwidth_mb_sec": 0 00:17:19.646 } 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "method": "bdev_iscsi_set_options", 00:17:19.646 "params": { 00:17:19.646 "timeout_sec": 30 00:17:19.646 } 00:17:19.646 }, 00:17:19.646 { 00:17:19.646 "method": "bdev_nvme_set_options", 00:17:19.646 "params": { 00:17:19.646 "action_on_timeout": "none", 00:17:19.646 "timeout_us": 0, 00:17:19.646 "timeout_admin_us": 0, 00:17:19.646 "keep_alive_timeout_ms": 10000, 00:17:19.646 "arbitration_burst": 0, 00:17:19.646 "low_priority_weight": 0, 00:17:19.646 "medium_priority_weight": 0, 00:17:19.646 "high_priority_weight": 0, 00:17:19.647 "nvme_adminq_poll_period_us": 10000, 00:17:19.647 "nvme_ioq_poll_period_us": 0, 00:17:19.647 "io_queue_requests": 0, 00:17:19.647 "delay_cmd_submit": true, 00:17:19.647 "transport_retry_count": 4, 00:17:19.647 "bdev_retry_count": 3, 00:17:19.647 "transport_ack_timeout": 0, 00:17:19.647 "ctrlr_loss_timeout_sec": 0, 00:17:19.647 "reconnect_delay_sec": 0, 00:17:19.647 "fast_io_fail_timeout_sec": 0, 00:17:19.647 "disable_auto_failback": false, 00:17:19.647 "generate_uuids": false, 00:17:19.647 "transport_tos": 0, 00:17:19.647 "nvme_error_stat": false, 00:17:19.647 "rdma_srq_size": 0, 00:17:19.647 "io_path_stat": false, 00:17:19.647 "allow_accel_sequence": false, 00:17:19.647 "rdma_max_cq_size": 0, 00:17:19.647 "rdma_cm_event_timeout_ms": 0, 00:17:19.647 "dhchap_digests": [ 00:17:19.647 "sha256", 00:17:19.647 "sha384", 00:17:19.647 "sha512" 00:17:19.647 ], 00:17:19.647 "dhchap_dhgroups": [ 00:17:19.647 "null", 00:17:19.647 "ffdhe2048", 00:17:19.647 "ffdhe3072", 00:17:19.647 "ffdhe4096", 00:17:19.647 "ffdhe6144", 00:17:19.647 "ffdhe8192" 00:17:19.647 ] 00:17:19.647 } 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "method": "bdev_nvme_set_hotplug", 00:17:19.647 "params": { 00:17:19.647 "period_us": 100000, 00:17:19.647 "enable": false 00:17:19.647 } 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "method": "bdev_malloc_create", 00:17:19.647 "params": { 00:17:19.647 "name": "malloc0", 00:17:19.647 "num_blocks": 8192, 00:17:19.647 "block_size": 4096, 00:17:19.647 "physical_block_size": 4096, 00:17:19.647 "uuid": "de363f0c-51aa-4056-bfaf-ca851089cd51", 00:17:19.647 "optimal_io_boundary": 0, 00:17:19.647 "md_size": 0, 00:17:19.647 "dif_type": 0, 00:17:19.647 "dif_is_head_of_md": false, 00:17:19.647 "dif_pi_format": 0 00:17:19.647 } 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "method": "bdev_wait_for_examine" 00:17:19.647 } 00:17:19.647 ] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "scsi", 00:17:19.647 "config": null 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "scheduler", 00:17:19.647 "config": [ 00:17:19.647 { 00:17:19.647 "method": "framework_set_scheduler", 00:17:19.647 "params": { 00:17:19.647 "name": "static" 00:17:19.647 } 
00:17:19.647 } 00:17:19.647 ] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "vhost_scsi", 00:17:19.647 "config": [] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "vhost_blk", 00:17:19.647 "config": [] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "ublk", 00:17:19.647 "config": [ 00:17:19.647 { 00:17:19.647 "method": "ublk_create_target", 00:17:19.647 "params": { 00:17:19.647 "cpumask": "1" 00:17:19.647 } 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "method": "ublk_start_disk", 00:17:19.647 "params": { 00:17:19.647 "bdev_name": "malloc0", 00:17:19.647 "ublk_id": 0, 00:17:19.647 "num_queues": 1, 00:17:19.647 "queue_depth": 128 00:17:19.647 } 00:17:19.647 } 00:17:19.647 ] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "nbd", 00:17:19.647 "config": [] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "nvmf", 00:17:19.647 "config": [ 00:17:19.647 { 00:17:19.647 "method": "nvmf_set_config", 00:17:19.647 "params": { 00:17:19.647 "discovery_filter": "match_any", 00:17:19.647 "admin_cmd_passthru": { 00:17:19.647 "identify_ctrlr": false 00:17:19.647 }, 00:17:19.647 "dhchap_digests": [ 00:17:19.647 "sha256", 00:17:19.647 "sha384", 00:17:19.647 "sha512" 00:17:19.647 ], 00:17:19.647 "dhchap_dhgroups": [ 00:17:19.647 "null", 00:17:19.647 "ffdhe2048", 00:17:19.647 "ffdhe3072", 00:17:19.647 "ffdhe4096", 00:17:19.647 "ffdhe6144", 00:17:19.647 "ffdhe8192" 00:17:19.647 ] 00:17:19.647 } 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "method": "nvmf_set_max_subsystems", 00:17:19.647 "params": { 00:17:19.647 "max_subsystems": 1024 00:17:19.647 } 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "method": "nvmf_set_crdt", 00:17:19.647 "params": { 00:17:19.647 "crdt1": 0, 00:17:19.647 "crdt2": 0, 00:17:19.647 "crdt3": 0 00:17:19.647 } 00:17:19.647 } 00:17:19.647 ] 00:17:19.647 }, 00:17:19.647 { 00:17:19.647 "subsystem": "iscsi", 00:17:19.647 "config": [ 00:17:19.647 { 00:17:19.647 "method": "iscsi_set_options", 00:17:19.647 "params": { 00:17:19.647 "node_base": "iqn.2016-06.io.spdk", 00:17:19.647 "max_sessions": 128, 00:17:19.647 "max_connections_per_session": 2, 00:17:19.647 "max_queue_depth": 64, 00:17:19.647 "default_time2wait": 2, 00:17:19.647 "default_time2retain": 20, 00:17:19.647 "first_burst_length": 8192, 00:17:19.647 "immediate_data": true, 00:17:19.647 "allow_duplicated_isid": false, 00:17:19.647 "error_recovery_level": 0, 00:17:19.647 "nop_timeout": 60, 00:17:19.647 "nop_in_interval": 30, 00:17:19.647 "disable_chap": false, 00:17:19.647 "require_chap": false, 00:17:19.647 "mutual_chap": false, 00:17:19.647 "chap_group": 0, 00:17:19.647 "max_large_datain_per_connection": 64, 00:17:19.647 "max_r2t_per_connection": 4, 00:17:19.647 "pdu_pool_size": 36864, 00:17:19.647 "immediate_data_pool_size": 16384, 00:17:19.647 "data_out_pool_size": 2048 00:17:19.647 } 00:17:19.647 } 00:17:19.647 ] 00:17:19.647 } 00:17:19.647 ] 00:17:19.647 }' 00:17:19.647 [2024-11-21 05:03:36.243816] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:17:19.647 [2024-11-21 05:03:36.243993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85139 ] 00:17:19.909 [2024-11-21 05:03:36.409701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.909 [2024-11-21 05:03:36.450998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.483 [2024-11-21 05:03:36.931636] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:20.483 [2024-11-21 05:03:36.932073] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:20.483 [2024-11-21 05:03:36.939784] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:20.483 [2024-11-21 05:03:36.939887] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:20.483 [2024-11-21 05:03:36.939897] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:20.483 [2024-11-21 05:03:36.939909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:20.483 [2024-11-21 05:03:36.948768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:20.483 [2024-11-21 05:03:36.948800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:20.483 [2024-11-21 05:03:36.955661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:20.483 [2024-11-21 05:03:36.955804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:20.483 [2024-11-21 05:03:36.972639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 85139 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85139 ']' 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85139 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85139 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.483 killing process with pid 85139 00:17:20.483 
05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85139' 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85139 00:17:20.483 05:03:37 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85139 00:17:21.056 [2024-11-21 05:03:37.564336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:21.056 [2024-11-21 05:03:37.603670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:21.056 [2024-11-21 05:03:37.603820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:21.056 [2024-11-21 05:03:37.612655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:21.056 [2024-11-21 05:03:37.612748] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:21.056 [2024-11-21 05:03:37.612761] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:21.056 [2024-11-21 05:03:37.612791] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:21.056 [2024-11-21 05:03:37.612969] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:21.629 05:03:38 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:21.629 00:17:21.629 real 0m4.404s 00:17:21.629 user 0m2.792s 00:17:21.629 sys 0m2.292s 00:17:21.629 ************************************ 00:17:21.629 END TEST test_save_ublk_config 00:17:21.629 ************************************ 00:17:21.629 05:03:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.629 05:03:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:21.629 05:03:38 ublk -- ublk/ublk.sh@139 -- # spdk_pid=85195 00:17:21.629 05:03:38 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.629 05:03:38 ublk -- ublk/ublk.sh@141 -- # waitforlisten 85195 00:17:21.629 05:03:38 ublk -- common/autotest_common.sh@835 -- # '[' -z 85195 ']' 00:17:21.629 05:03:38 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:21.629 05:03:38 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.629 05:03:38 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.629 05:03:38 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.629 05:03:38 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.629 05:03:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:21.629 [2024-11-21 05:03:38.357872] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:17:21.629 [2024-11-21 05:03:38.358060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85195 ] 00:17:21.891 [2024-11-21 05:03:38.519867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:21.891 [2024-11-21 05:03:38.558771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.891 [2024-11-21 05:03:38.558910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.836 05:03:39 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.836 05:03:39 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:22.836 05:03:39 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:22.836 05:03:39 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:22.836 05:03:39 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.836 05:03:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:22.836 ************************************ 00:17:22.836 START TEST test_create_ublk 00:17:22.836 ************************************ 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:22.836 [2024-11-21 05:03:39.232637] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:22.836 [2024-11-21 05:03:39.234934] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:22.836 [2024-11-21 05:03:39.344827] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:22.836 [2024-11-21 05:03:39.345347] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:22.836 [2024-11-21 05:03:39.345368] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:22.836 [2024-11-21 05:03:39.345380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:22.836 [2024-11-21 05:03:39.352735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:22.836 [2024-11-21 05:03:39.352778] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:22.836 
[2024-11-21 05:03:39.360637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:22.836 [2024-11-21 05:03:39.361451] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:22.836 [2024-11-21 05:03:39.390654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:22.836 05:03:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:22.836 { 00:17:22.836 "ublk_device": "/dev/ublkb0", 00:17:22.836 "id": 0, 00:17:22.836 "queue_depth": 512, 00:17:22.836 "num_queues": 4, 00:17:22.836 "bdev_name": "Malloc0" 00:17:22.836 } 00:17:22.836 ]' 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:22.836 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:23.098 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:23.098 05:03:39 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
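The fio_template string assembled above is executed next. Unwrapped from the xtrace prefixes, the job it runs is a 10-second direct-I/O pattern write over the whole 128 MiB (134217728-byte) ublk device (command reproduced from the trace that follows):

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Because --time_based lets the write phase consume the entire runtime, fio warns below that the verification read phase never starts; the run is effectively a sustained 0xcc pattern write against the ublk block device.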
00:17:23.098 05:03:39 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:17:23.098 fio: verification read phase will never start because write phase uses all of runtime
00:17:23.098 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:17:23.098 fio-3.35
00:17:23.098 Starting 1 process
00:17:35.303
00:17:35.303 fio_test: (groupid=0, jobs=1): err= 0: pid=85240: Thu Nov 21 05:03:49 2024
00:17:35.303 write: IOPS=19.3k, BW=75.2MiB/s (78.9MB/s)(752MiB/10001msec); 0 zone resets
00:17:35.303 clat (usec): min=31, max=3865, avg=51.23, stdev=79.65
00:17:35.303 lat (usec): min=32, max=3866, avg=51.65, stdev=79.66
00:17:35.303 clat percentiles (usec):
00:17:35.303 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 43],
00:17:35.303 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 50],
00:17:35.303 | 70.00th=[ 51], 80.00th=[ 54], 90.00th=[ 58], 95.00th=[ 63],
00:17:35.303 | 99.00th=[ 75], 99.50th=[ 81], 99.90th=[ 1221], 99.95th=[ 2409],
00:17:35.303 | 99.99th=[ 3458]
00:17:35.303 bw ( KiB/s): min=60792, max=87008, per=99.30%, avg=76476.21, stdev=7758.97, samples=19
00:17:35.303 iops : min=15198, max=21752, avg=19119.05, stdev=1939.74, samples=19
00:17:35.303 lat (usec) : 50=62.91%, 100=36.82%, 250=0.12%, 500=0.02%, 750=0.01%
00:17:35.303 lat (usec) : 1000=0.01%
00:17:35.303 lat (msec) : 2=0.05%, 4=0.06%
00:17:35.303 cpu : usr=2.36%, sys=13.26%, ctx=192603, majf=0, minf=795
00:17:35.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:35.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:35.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:35.303 issued rwts: total=0,192560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:35.303 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:35.303
00:17:35.303 Run status group 0 (all jobs):
00:17:35.303 WRITE: bw=75.2MiB/s (78.9MB/s), 75.2MiB/s-75.2MiB/s (78.9MB/s-78.9MB/s), io=752MiB (789MB), run=10001-10001msec
00:17:35.303
00:17:35.303 Disk stats (read/write):
00:17:35.303 ublkb0: ios=0/190241, merge=0/0, ticks=0/8450, in_queue=8450, util=99.10%
00:17:35.303 05:03:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:35.303 [2024-11-21 05:03:49.822144] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:17:35.303 [2024-11-21 05:03:49.856171] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:35.303 [2024-11-21 05:03:49.856966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:17:35.303 [2024-11-21 05:03:49.863636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:35.303 [2024-11-21 05:03:49.863895] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:17:35.303 [2024-11-21 05:03:49.863902] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.303 05:03:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
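The NOT wrapper entered here asserts the failure path: disk 0 was just stopped, so a second ublk_stop_disk must be rejected. Boiled down, the JSON-RPC exchange recorded in the trace that follows is:

    request:  { "ublk_id": 0, "method": "ublk_stop_disk", "req_id": 1 }
    response: { "code": -19, "message": "No such device" }    # -19 is -ENODEV

and the harness counts that non-zero outcome (es=1) as the test passing.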
00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 [2024-11-21 05:03:49.879717] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:35.303 request: 00:17:35.303 { 00:17:35.303 "ublk_id": 0, 00:17:35.303 "method": "ublk_stop_disk", 00:17:35.303 "req_id": 1 00:17:35.303 } 00:17:35.303 Got JSON-RPC error response 00:17:35.303 response: 00:17:35.303 { 00:17:35.303 "code": -19, 00:17:35.303 "message": "No such device" 00:17:35.303 } 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.303 05:03:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 [2024-11-21 05:03:49.895688] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:35.303 [2024-11-21 05:03:49.897210] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:35.303 [2024-11-21 05:03:49.897239] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.303 05:03:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.303 05:03:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:35.303 05:03:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 05:03:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.303 05:03:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:35.303 05:03:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:35.303 05:03:50 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:35.303 05:03:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:35.303 05:03:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.303 05:03:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 05:03:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.303 05:03:50 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:35.303 05:03:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:35.303 ************************************ 00:17:35.303 END TEST test_create_ublk 00:17:35.303 ************************************ 00:17:35.303 05:03:50 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:35.303 00:17:35.303 real 0m10.842s 00:17:35.303 user 0m0.528s 00:17:35.303 sys 0m1.413s 00:17:35.303 05:03:50 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.303 05:03:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 05:03:50 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:35.303 05:03:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.303 05:03:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.303 05:03:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 ************************************ 00:17:35.303 START TEST test_create_multi_ublk 00:17:35.303 ************************************ 00:17:35.303 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:35.303 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:35.303 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.303 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.303 [2024-11-21 05:03:50.115627] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:35.303 [2024-11-21 05:03:50.116779] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:35.303 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 [2024-11-21 05:03:50.223739] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
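test_create_multi_ublk now repeats the same create sequence for ids 0 through 3 ($MAX_DEV_ID); the per-device commands in the trace reduce to a loop like this sketch (same arguments as the harness uses):

  scripts/rpc.py ublk_create_target
  for i in $(seq 0 3); do
      scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
      scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
  done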
00:17:35.304 [2024-11-21 05:03:50.224057] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:35.304 [2024-11-21 05:03:50.224071] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:35.304 [2024-11-21 05:03:50.224077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:35.304 [2024-11-21 05:03:50.247631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:35.304 [2024-11-21 05:03:50.247650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:35.304 [2024-11-21 05:03:50.259630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:35.304 [2024-11-21 05:03:50.260140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:35.304 [2024-11-21 05:03:50.300642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 [2024-11-21 05:03:50.408730] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:35.304 [2024-11-21 05:03:50.409051] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:35.304 [2024-11-21 05:03:50.409063] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:35.304 [2024-11-21 05:03:50.409069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:35.304 [2024-11-21 05:03:50.420634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:35.304 [2024-11-21 05:03:50.420653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:35.304 [2024-11-21 05:03:50.432630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:35.304 [2024-11-21 05:03:50.433165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:35.304 [2024-11-21 05:03:50.436300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.304 05:03:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 [2024-11-21 05:03:50.532730] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:35.304 [2024-11-21 05:03:50.533046] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:35.304 [2024-11-21 05:03:50.533058] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:35.304 [2024-11-21 05:03:50.533063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:35.304 [2024-11-21 05:03:50.544661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:35.304 [2024-11-21 05:03:50.544678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:35.304 [2024-11-21 05:03:50.556634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:35.304 [2024-11-21 05:03:50.557149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:35.304 [2024-11-21 05:03:50.560322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 [2024-11-21 05:03:50.660715] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:35.304 [2024-11-21 05:03:50.661021] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:35.304 [2024-11-21 05:03:50.661031] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:35.304 [2024-11-21 05:03:50.661038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:35.304 [2024-11-21 
05:03:50.672647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:35.304 [2024-11-21 05:03:50.672669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:35.304 [2024-11-21 05:03:50.684634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:35.304 [2024-11-21 05:03:50.685152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:35.304 [2024-11-21 05:03:50.724628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:35.304 { 00:17:35.304 "ublk_device": "/dev/ublkb0", 00:17:35.304 "id": 0, 00:17:35.304 "queue_depth": 512, 00:17:35.304 "num_queues": 4, 00:17:35.304 "bdev_name": "Malloc0" 00:17:35.304 }, 00:17:35.304 { 00:17:35.304 "ublk_device": "/dev/ublkb1", 00:17:35.304 "id": 1, 00:17:35.304 "queue_depth": 512, 00:17:35.304 "num_queues": 4, 00:17:35.304 "bdev_name": "Malloc1" 00:17:35.304 }, 00:17:35.304 { 00:17:35.304 "ublk_device": "/dev/ublkb2", 00:17:35.304 "id": 2, 00:17:35.304 "queue_depth": 512, 00:17:35.304 "num_queues": 4, 00:17:35.304 "bdev_name": "Malloc2" 00:17:35.304 }, 00:17:35.304 { 00:17:35.304 "ublk_device": "/dev/ublkb3", 00:17:35.304 "id": 3, 00:17:35.304 "queue_depth": 512, 00:17:35.304 "num_queues": 4, 00:17:35.304 "bdev_name": "Malloc3" 00:17:35.304 } 00:17:35.304 ]' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
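The jq verification pass here checks each entry of the ublk_get_disks listing against the expected device node, id, queue depth, queue count, and backing bdev. Condensed into one loop, using the same jq expressions as the trace:

  disks=$(scripts/rpc.py ublk_get_disks)
  for i in $(seq 0 3); do
      [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]]
      [[ $(jq -r ".[$i].id"          <<< "$disks") == "$i" ]]
      [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]
      [[ $(jq -r ".[$i].num_queues"  <<< "$disks") == 4 ]]
      [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]
  done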
00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:35.304 05:03:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:35.304 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:35.304 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 [2024-11-21 05:03:51.400721] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:35.305 [2024-11-21 05:03:51.440668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:35.305 [2024-11-21 05:03:51.441341] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:35.305 [2024-11-21 05:03:51.448629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:35.305 [2024-11-21 05:03:51.448875] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:35.305 [2024-11-21 05:03:51.448883] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 [2024-11-21 05:03:51.464691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:35.305 [2024-11-21 05:03:51.496663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:35.305 [2024-11-21 05:03:51.497306] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:35.305 [2024-11-21 05:03:51.504640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:35.305 [2024-11-21 05:03:51.504883] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:35.305 [2024-11-21 05:03:51.504893] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 [2024-11-21 05:03:51.520691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:35.305 [2024-11-21 05:03:51.560655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:35.305 [2024-11-21 05:03:51.561270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:35.305 [2024-11-21 05:03:51.568637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:35.305 [2024-11-21 05:03:51.568883] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:35.305 [2024-11-21 05:03:51.568889] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
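Teardown mirrors creation: each device is stopped in turn, then the target is destroyed (the trace below drives ublk_destroy_target through rpc.py with a 120-second timeout) and the backing malloc bdevs are deleted. As a sketch:

  for i in $(seq 0 3); do
      scripts/rpc.py ublk_stop_disk "$i"
  done
  scripts/rpc.py -t 120 ublk_destroy_target
  for i in $(seq 0 3); do
      scripts/rpc.py bdev_malloc_delete "Malloc$i"
  done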
00:17:35.305 [2024-11-21 05:03:51.584680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:35.305 [2024-11-21 05:03:51.620649] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:35.305 [2024-11-21 05:03:51.621226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:35.305 [2024-11-21 05:03:51.628647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:35.305 [2024-11-21 05:03:51.628869] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:35.305 [2024-11-21 05:03:51.628874] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:35.305 [2024-11-21 05:03:51.828704] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:35.305 [2024-11-21 05:03:51.829964] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:35.305 [2024-11-21 05:03:51.829997] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.305 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.305 05:03:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.305 05:03:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:35.305 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.305 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:35.564 ************************************ 00:17:35.564 END TEST test_create_multi_ublk 00:17:35.564 ************************************ 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:35.564 00:17:35.564 real 0m2.165s 00:17:35.564 user 0m0.838s 00:17:35.564 sys 0m0.134s 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.564 05:03:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:35.564 05:03:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:35.564 05:03:52 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:35.564 05:03:52 ublk -- ublk/ublk.sh@130 -- # killprocess 85195 00:17:35.564 05:03:52 ublk -- common/autotest_common.sh@954 -- # '[' -z 85195 ']' 00:17:35.564 05:03:52 ublk -- common/autotest_common.sh@958 -- # kill -0 85195 00:17:35.564 05:03:52 ublk -- common/autotest_common.sh@959 -- # uname 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85195 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85195' 00:17:35.835 killing process with pid 85195 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@973 -- # kill 85195 00:17:35.835 05:03:52 ublk -- common/autotest_common.sh@978 -- # wait 85195 00:17:35.835 [2024-11-21 05:03:52.543365] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:35.835 [2024-11-21 05:03:52.543429] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:36.123 00:17:36.123 real 0m19.149s 00:17:36.123 user 0m28.545s 00:17:36.123 sys 0m8.555s 00:17:36.123 05:03:52 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.123 05:03:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:36.123 ************************************ 00:17:36.123 END TEST ublk 00:17:36.123 ************************************ 00:17:36.123 05:03:52 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:36.123 05:03:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:17:36.123 05:03:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.123 05:03:52 -- common/autotest_common.sh@10 -- # set +x 00:17:36.123 ************************************ 00:17:36.123 START TEST ublk_recovery 00:17:36.123 ************************************ 00:17:36.123 05:03:52 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:36.383 * Looking for test storage... 00:17:36.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:36.383 05:03:52 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.383 05:03:52 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.383 05:03:52 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.383 05:03:52 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.383 05:03:52 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.383 05:03:52 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.383 05:03:52 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.383 05:03:52 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.384 05:03:52 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.384 --rc genhtml_branch_coverage=1 00:17:36.384 --rc genhtml_function_coverage=1 00:17:36.384 --rc genhtml_legend=1 00:17:36.384 --rc geninfo_all_blocks=1 00:17:36.384 --rc geninfo_unexecuted_blocks=1 00:17:36.384 00:17:36.384 ' 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.384 --rc genhtml_branch_coverage=1 00:17:36.384 --rc genhtml_function_coverage=1 00:17:36.384 --rc genhtml_legend=1 00:17:36.384 --rc geninfo_all_blocks=1 00:17:36.384 --rc geninfo_unexecuted_blocks=1 00:17:36.384 00:17:36.384 ' 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.384 --rc genhtml_branch_coverage=1 00:17:36.384 --rc genhtml_function_coverage=1 00:17:36.384 --rc genhtml_legend=1 00:17:36.384 --rc geninfo_all_blocks=1 00:17:36.384 --rc geninfo_unexecuted_blocks=1 00:17:36.384 00:17:36.384 ' 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.384 --rc genhtml_branch_coverage=1 00:17:36.384 --rc genhtml_function_coverage=1 00:17:36.384 --rc genhtml_legend=1 00:17:36.384 --rc geninfo_all_blocks=1 00:17:36.384 --rc geninfo_unexecuted_blocks=1 00:17:36.384 00:17:36.384 ' 00:17:36.384 05:03:52 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:36.384 05:03:52 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:36.384 05:03:52 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:36.384 05:03:52 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85567 00:17:36.384 05:03:52 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.384 05:03:52 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85567 00:17:36.384 05:03:52 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85567 ']' 00:17:36.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.384 05:03:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:36.384 [2024-11-21 05:03:53.039666] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:17:36.384 [2024-11-21 05:03:53.040210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85567 ] 00:17:36.643 [2024-11-21 05:03:53.195459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.643 [2024-11-21 05:03:53.219423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.643 [2024-11-21 05:03:53.219509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:37.210 05:03:53 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.210 [2024-11-21 05:03:53.878628] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:37.210 [2024-11-21 05:03:53.879955] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.210 05:03:53 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.210 malloc0 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.210 05:03:53 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.210 05:03:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:37.210 [2024-11-21 05:03:53.918734] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:37.210 [2024-11-21 05:03:53.918837] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:37.210 [2024-11-21 05:03:53.918844] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:37.210 [2024-11-21 05:03:53.918852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:37.210 [2024-11-21 05:03:53.927728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:37.210 [2024-11-21 05:03:53.927756] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:37.210 [2024-11-21 05:03:53.934627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:37.210 [2024-11-21 05:03:53.934753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:37.467 [2024-11-21 05:03:53.956642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:37.467 1 00:17:37.467 05:03:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.467 05:03:53 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:38.401 05:03:54 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85600 00:17:38.401 05:03:54 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:38.401 05:03:54 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:38.401 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:38.401 fio-3.35 00:17:38.401 Starting 1 process 00:17:43.667 05:03:59 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85567 00:17:43.667 05:03:59 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:48.945 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85567 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:48.945 05:04:04 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85705 00:17:48.945 05:04:04 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.945 05:04:04 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:48.945 05:04:04 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85705 00:17:48.945 05:04:04 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85705 ']' 00:17:48.945 05:04:04 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.945 05:04:04 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.945 05:04:04 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.945 05:04:04 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.945 05:04:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.945 [2024-11-21 05:04:05.073082] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
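The recovery scenario just set up: a 60-second randrw fio job is started against /dev/ublkb1, the original spdk_tgt (pid 85567) is hard-killed five seconds in, and a second target (pid 85705) is brought up to reclaim the still-open ublk device. The sequence reduces to roughly this sketch (the waitforlisten polling is elided; all commands appear in the trace):

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_pid=$!
  sleep 5
  kill -9 "$spdk_pid"                            # simulate a target crash mid-I/O
  build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
  # once the new target's RPC socket is up, re-create the backing pieces and recover:
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1     # re-attaches the existing /dev/ublkb1
  wait "$fio_pid"                                # fio rides out the crash; the 60 s run below completes with err=0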
00:17:48.945 [2024-11-21 05:04:05.073230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85705 ] 00:17:48.945 [2024-11-21 05:04:05.224010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:48.945 [2024-11-21 05:04:05.248170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.945 [2024-11-21 05:04:05.248199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:49.203 05:04:05 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:49.203 [2024-11-21 05:04:05.855627] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:49.203 [2024-11-21 05:04:05.856935] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.203 05:04:05 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:49.203 malloc0 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.203 05:04:05 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:49.203 [2024-11-21 05:04:05.895762] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:49.203 [2024-11-21 05:04:05.895804] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:49.203 [2024-11-21 05:04:05.895811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:49.203 [2024-11-21 05:04:05.903658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:49.203 [2024-11-21 05:04:05.903675] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:49.203 [2024-11-21 05:04:05.903692] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:49.203 [2024-11-21 05:04:05.903758] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:49.203 1 00:17:49.203 05:04:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.203 05:04:05 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85600 00:17:49.203 [2024-11-21 05:04:05.911632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:49.203 [2024-11-21 05:04:05.917984] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:49.203 [2024-11-21 05:04:05.925819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:49.203 [2024-11-21 
05:04:05.925837] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:45.435 00:18:45.435 fio_test: (groupid=0, jobs=1): err= 0: pid=85603: Thu Nov 21 05:04:55 2024 00:18:45.435 read: IOPS=25.2k, BW=98.6MiB/s (103MB/s)(5918MiB/60002msec) 00:18:45.435 slat (nsec): min=1132, max=414314, avg=5304.12, stdev=1533.32 00:18:45.435 clat (usec): min=640, max=5965.3k, avg=2508.08, stdev=40238.40 00:18:45.435 lat (usec): min=645, max=5965.3k, avg=2513.38, stdev=40238.40 00:18:45.435 clat percentiles (usec): 00:18:45.435 | 1.00th=[ 1729], 5.00th=[ 1811], 10.00th=[ 1893], 20.00th=[ 2073], 00:18:45.435 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2147], 00:18:45.435 | 70.00th=[ 2180], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3163], 00:18:45.435 | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 7898], 99.95th=[ 9503], 00:18:45.435 | 99.99th=[13435] 00:18:45.435 bw ( KiB/s): min=22448, max=133208, per=100.00%, avg=111260.57, stdev=13873.16, samples=108 00:18:45.435 iops : min= 5612, max=33302, avg=27815.14, stdev=3468.28, samples=108 00:18:45.435 write: IOPS=25.2k, BW=98.5MiB/s (103MB/s)(5910MiB/60002msec); 0 zone resets 00:18:45.435 slat (nsec): min=1073, max=354996, avg=5583.22, stdev=1550.37 00:18:45.435 clat (usec): min=641, max=5965.6k, avg=2552.94, stdev=37233.21 00:18:45.435 lat (usec): min=647, max=5965.6k, avg=2558.52, stdev=37233.21 00:18:45.435 clat percentiles (usec): 00:18:45.435 | 1.00th=[ 1795], 5.00th=[ 1893], 10.00th=[ 1975], 20.00th=[ 2180], 00:18:45.435 | 30.00th=[ 2212], 40.00th=[ 2212], 50.00th=[ 2245], 60.00th=[ 2245], 00:18:45.435 | 70.00th=[ 2278], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3064], 00:18:45.435 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 7832], 99.95th=[ 9110], 00:18:45.435 | 99.99th=[13435] 00:18:45.435 bw ( KiB/s): min=21800, max=131832, per=100.00%, avg=111111.55, stdev=14013.73, samples=108 00:18:45.435 iops : min= 5450, max=32958, avg=27777.88, stdev=3503.42, samples=108 00:18:45.435 lat (usec) : 750=0.01%, 1000=0.01% 00:18:45.435 lat (msec) : 2=11.74%, 4=85.56%, 10=2.65%, 20=0.04%, >=2000=0.01% 00:18:45.435 cpu : usr=5.54%, sys=28.31%, ctx=100209, majf=0, minf=13 00:18:45.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:45.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.435 issued rwts: total=1514989,1512995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.435 00:18:45.435 Run status group 0 (all jobs): 00:18:45.435 READ: bw=98.6MiB/s (103MB/s), 98.6MiB/s-98.6MiB/s (103MB/s-103MB/s), io=5918MiB (6205MB), run=60002-60002msec 00:18:45.435 WRITE: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=5910MiB (6197MB), run=60002-60002msec 00:18:45.435 00:18:45.435 Disk stats (read/write): 00:18:45.435 ublkb1: ios=1511955/1509859, merge=0/0, ticks=3710781/3647795, in_queue=7358576, util=99.89% 00:18:45.435 05:04:55 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 [2024-11-21 05:04:55.231442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:45.435 [2024-11-21 05:04:55.268647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:18:45.435 [2024-11-21 05:04:55.268834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:45.435 [2024-11-21 05:04:55.281644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:45.435 [2024-11-21 05:04:55.281749] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:45.435 [2024-11-21 05:04:55.281757] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 05:04:55 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 [2024-11-21 05:04:55.285807] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:45.435 [2024-11-21 05:04:55.287481] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:45.435 [2024-11-21 05:04:55.287513] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.435 05:04:55 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:45.435 05:04:55 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:45.435 05:04:55 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85705 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85705 ']' 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85705 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85705 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.435 killing process with pid 85705 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85705' 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85705 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85705 00:18:45.435 [2024-11-21 05:04:55.491127] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:45.435 [2024-11-21 05:04:55.491177] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:45.435 ************************************ 00:18:45.435 END TEST ublk_recovery 00:18:45.435 ************************************ 00:18:45.435 00:18:45.435 real 1m2.949s 00:18:45.435 user 1m36.917s 00:18:45.435 sys 0m38.666s 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.435 05:04:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 05:04:55 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:45.435 05:04:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:45.435 05:04:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.435 05:04:55 -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 05:04:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- 
spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:45.435 05:04:55 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:45.436 05:04:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:45.436 05:04:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.436 05:04:55 -- common/autotest_common.sh@10 -- # set +x 00:18:45.436 ************************************ 00:18:45.436 START TEST ftl 00:18:45.436 ************************************ 00:18:45.436 05:04:55 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:45.436 * Looking for test storage... 00:18:45.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.436 05:04:55 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.436 05:04:55 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.436 05:04:55 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.436 05:04:56 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.436 05:04:56 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.436 05:04:56 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.436 05:04:56 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.436 05:04:56 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.436 05:04:56 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:45.436 05:04:56 ftl -- scripts/common.sh@345 -- # : 1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.436 05:04:56 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.436 05:04:56 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@353 -- # local d=1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.436 05:04:56 ftl -- scripts/common.sh@355 -- # echo 1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.436 05:04:56 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@353 -- # local d=2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.436 05:04:56 ftl -- scripts/common.sh@355 -- # echo 2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.436 05:04:56 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.436 05:04:56 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.436 05:04:56 ftl -- scripts/common.sh@368 -- # return 0 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.436 --rc genhtml_branch_coverage=1 00:18:45.436 --rc genhtml_function_coverage=1 00:18:45.436 --rc genhtml_legend=1 00:18:45.436 --rc geninfo_all_blocks=1 00:18:45.436 --rc geninfo_unexecuted_blocks=1 00:18:45.436 00:18:45.436 ' 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.436 --rc genhtml_branch_coverage=1 00:18:45.436 --rc genhtml_function_coverage=1 00:18:45.436 --rc genhtml_legend=1 00:18:45.436 --rc geninfo_all_blocks=1 00:18:45.436 --rc geninfo_unexecuted_blocks=1 00:18:45.436 00:18:45.436 ' 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.436 --rc genhtml_branch_coverage=1 00:18:45.436 --rc genhtml_function_coverage=1 00:18:45.436 --rc genhtml_legend=1 00:18:45.436 --rc geninfo_all_blocks=1 00:18:45.436 --rc geninfo_unexecuted_blocks=1 00:18:45.436 00:18:45.436 ' 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.436 --rc genhtml_branch_coverage=1 00:18:45.436 --rc genhtml_function_coverage=1 00:18:45.436 --rc genhtml_legend=1 00:18:45.436 --rc geninfo_all_blocks=1 00:18:45.436 --rc geninfo_unexecuted_blocks=1 00:18:45.436 00:18:45.436 ' 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:45.436 05:04:56 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:45.436 05:04:56 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.436 05:04:56 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.436 05:04:56 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:45.436 05:04:56 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:45.436 05:04:56 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.436 05:04:56 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:45.436 05:04:56 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:45.436 05:04:56 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.436 05:04:56 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.436 05:04:56 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:45.436 05:04:56 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:45.436 05:04:56 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.436 05:04:56 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.436 05:04:56 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:45.436 05:04:56 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:45.436 05:04:56 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.436 05:04:56 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.436 05:04:56 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:45.436 05:04:56 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:45.436 05:04:56 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.436 05:04:56 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.436 05:04:56 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.436 05:04:56 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.436 05:04:56 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:45.436 05:04:56 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:45.436 05:04:56 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.436 05:04:56 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:45.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:45.436 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:45.436 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:45.436 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:45.436 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86507 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86507 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@835 -- # '[' -z 86507 ']' 00:18:45.436 05:04:56 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:45.436 05:04:56 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.436 05:04:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:45.436 [2024-11-21 05:04:56.601244] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:18:45.436 [2024-11-21 05:04:56.601349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86507 ] 00:18:45.436 [2024-11-21 05:04:56.751485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.436 [2024-11-21 05:04:56.769040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.436 05:04:57 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.436 05:04:57 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:45.437 05:04:57 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:45.437 05:04:57 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:45.437 05:04:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:45.437 05:04:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@50 -- # break 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@63 -- # break 00:18:45.437 05:04:58 ftl -- ftl/ftl.sh@66 -- # killprocess 86507 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@954 -- # '[' -z 86507 ']' 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@958 -- # kill -0 86507 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@959 -- # uname 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.437 05:04:58 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86507 00:18:45.437 killing process with pid 86507 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86507' 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@973 -- # kill 86507 00:18:45.437 05:04:58 ftl -- common/autotest_common.sh@978 -- # wait 86507 00:18:45.437 05:04:59 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:45.437 05:04:59 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:45.437 05:04:59 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:45.437 05:04:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.437 05:04:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:45.437 ************************************ 00:18:45.437 START TEST ftl_fio_basic 00:18:45.437 ************************************ 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:45.437 * Looking for test storage... 00:18:45.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.437 --rc genhtml_branch_coverage=1 00:18:45.437 --rc genhtml_function_coverage=1 00:18:45.437 --rc genhtml_legend=1 00:18:45.437 --rc geninfo_all_blocks=1 00:18:45.437 --rc geninfo_unexecuted_blocks=1 00:18:45.437 00:18:45.437 ' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.437 --rc genhtml_branch_coverage=1 00:18:45.437 --rc genhtml_function_coverage=1 00:18:45.437 --rc genhtml_legend=1 00:18:45.437 --rc geninfo_all_blocks=1 00:18:45.437 --rc geninfo_unexecuted_blocks=1 00:18:45.437 00:18:45.437 ' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.437 --rc genhtml_branch_coverage=1 00:18:45.437 --rc genhtml_function_coverage=1 00:18:45.437 --rc genhtml_legend=1 00:18:45.437 --rc geninfo_all_blocks=1 00:18:45.437 --rc geninfo_unexecuted_blocks=1 00:18:45.437 00:18:45.437 ' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.437 --rc genhtml_branch_coverage=1 00:18:45.437 --rc genhtml_function_coverage=1 00:18:45.437 --rc genhtml_legend=1 00:18:45.437 --rc geninfo_all_blocks=1 00:18:45.437 --rc geninfo_unexecuted_blocks=1 00:18:45.437 00:18:45.437 ' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
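
The point of that repeated version probe: lcov 2.x renamed the coverage rc options (`lcov_branch_coverage` became `branch_coverage`, and likewise for function coverage), so autotest_common.sh has to pick the spelling the installed tool understands before exporting LCOV_OPTS. A sketch of that selection, reusing the `version_lt` helper from the earlier note (the real export also appends the genhtml/geninfo flags traced above):

    # lcov 1.x and 2.x spell the branch/function-coverage rc options
    # differently; choose the variant the installed binary accepts.
    ver=$(lcov --version | awk '{print $NF}')        # e.g. "1.15"
    if version_lt "$ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'
    fi
    export LCOV_OPTS=" $lcov_rc_opt"                 # consumed by the lcov wrappers
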
00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:45.437 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86623 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86623 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86623 ']' 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.438 05:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:45.438 [2024-11-21 05:04:59.295114] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
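
fio.sh drives its workloads from a bash associative array: the `declare -A suite` and the three `suite[...]` assignments traced above map a suite name to a list of fio jobs, and the `basic` argument from the command line selects which list runs. A condensed sketch of that lookup (list contents copied from the trace; the job-file naming detail is an assumption):

    declare -A suite
    suite[basic]='randw-verify randw-verify-j2 randw-verify-depth128'
    suite[extended]='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite[nightly]='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    tests=${suite[${3:-basic}]}          # third positional argument picks the suite
    for t in $tests; do
        echo "would run fio job: $t"     # the real script points fio at $t's job file
    done
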
00:18:45.438 [2024-11-21 05:04:59.295391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86623 ] 00:18:45.438 [2024-11-21 05:04:59.441260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.438 [2024-11-21 05:04:59.459817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.438 [2024-11-21 05:04:59.460090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.438 [2024-11-21 05:04:59.460039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:45.438 { 00:18:45.438 "name": "nvme0n1", 00:18:45.438 "aliases": [ 00:18:45.438 "00b3be39-23d8-4fbe-8573-87dc568452b3" 00:18:45.438 ], 00:18:45.438 "product_name": "NVMe disk", 00:18:45.438 "block_size": 4096, 00:18:45.438 "num_blocks": 1310720, 00:18:45.438 "uuid": "00b3be39-23d8-4fbe-8573-87dc568452b3", 00:18:45.438 "numa_id": -1, 00:18:45.438 "assigned_rate_limits": { 00:18:45.438 "rw_ios_per_sec": 0, 00:18:45.438 "rw_mbytes_per_sec": 0, 00:18:45.438 "r_mbytes_per_sec": 0, 00:18:45.438 "w_mbytes_per_sec": 0 00:18:45.438 }, 00:18:45.438 "claimed": false, 00:18:45.438 "zoned": false, 00:18:45.438 "supported_io_types": { 00:18:45.438 "read": true, 00:18:45.438 "write": true, 00:18:45.438 "unmap": true, 00:18:45.438 "flush": true, 00:18:45.438 "reset": true, 00:18:45.438 "nvme_admin": true, 00:18:45.438 "nvme_io": true, 00:18:45.438 "nvme_io_md": false, 00:18:45.438 "write_zeroes": true, 00:18:45.438 "zcopy": false, 00:18:45.438 "get_zone_info": false, 00:18:45.438 "zone_management": false, 00:18:45.438 "zone_append": false, 00:18:45.438 "compare": true, 00:18:45.438 "compare_and_write": false, 00:18:45.438 "abort": true, 00:18:45.438 
"seek_hole": false, 00:18:45.438 "seek_data": false, 00:18:45.438 "copy": true, 00:18:45.438 "nvme_iov_md": false 00:18:45.438 }, 00:18:45.438 "driver_specific": { 00:18:45.438 "nvme": [ 00:18:45.438 { 00:18:45.438 "pci_address": "0000:00:11.0", 00:18:45.438 "trid": { 00:18:45.438 "trtype": "PCIe", 00:18:45.438 "traddr": "0000:00:11.0" 00:18:45.438 }, 00:18:45.438 "ctrlr_data": { 00:18:45.438 "cntlid": 0, 00:18:45.438 "vendor_id": "0x1b36", 00:18:45.438 "model_number": "QEMU NVMe Ctrl", 00:18:45.438 "serial_number": "12341", 00:18:45.438 "firmware_revision": "8.0.0", 00:18:45.438 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:45.438 "oacs": { 00:18:45.438 "security": 0, 00:18:45.438 "format": 1, 00:18:45.438 "firmware": 0, 00:18:45.438 "ns_manage": 1 00:18:45.438 }, 00:18:45.438 "multi_ctrlr": false, 00:18:45.438 "ana_reporting": false 00:18:45.438 }, 00:18:45.438 "vs": { 00:18:45.438 "nvme_version": "1.4" 00:18:45.438 }, 00:18:45.438 "ns_data": { 00:18:45.438 "id": 1, 00:18:45.438 "can_share": false 00:18:45.438 } 00:18:45.438 } 00:18:45.438 ], 00:18:45.438 "mp_policy": "active_passive" 00:18:45.438 } 00:18:45.438 } 00:18:45.438 ]' 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:45.438 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:45.439 05:05:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=53055908-0e5c-4f82-b988-f014e5c97d90 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 53055908-0e5c-4f82-b988-f014e5c97d90 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6f1622cb-821f-48ff-be2a-49ad90152ea0 
00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:45.439 { 00:18:45.439 "name": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:45.439 "aliases": [ 00:18:45.439 "lvs/nvme0n1p0" 00:18:45.439 ], 00:18:45.439 "product_name": "Logical Volume", 00:18:45.439 "block_size": 4096, 00:18:45.439 "num_blocks": 26476544, 00:18:45.439 "uuid": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:45.439 "assigned_rate_limits": { 00:18:45.439 "rw_ios_per_sec": 0, 00:18:45.439 "rw_mbytes_per_sec": 0, 00:18:45.439 "r_mbytes_per_sec": 0, 00:18:45.439 "w_mbytes_per_sec": 0 00:18:45.439 }, 00:18:45.439 "claimed": false, 00:18:45.439 "zoned": false, 00:18:45.439 "supported_io_types": { 00:18:45.439 "read": true, 00:18:45.439 "write": true, 00:18:45.439 "unmap": true, 00:18:45.439 "flush": false, 00:18:45.439 "reset": true, 00:18:45.439 "nvme_admin": false, 00:18:45.439 "nvme_io": false, 00:18:45.439 "nvme_io_md": false, 00:18:45.439 "write_zeroes": true, 00:18:45.439 "zcopy": false, 00:18:45.439 "get_zone_info": false, 00:18:45.439 "zone_management": false, 00:18:45.439 "zone_append": false, 00:18:45.439 "compare": false, 00:18:45.439 "compare_and_write": false, 00:18:45.439 "abort": false, 00:18:45.439 "seek_hole": true, 00:18:45.439 "seek_data": true, 00:18:45.439 "copy": false, 00:18:45.439 "nvme_iov_md": false 00:18:45.439 }, 00:18:45.439 "driver_specific": { 00:18:45.439 "lvol": { 00:18:45.439 "lvol_store_uuid": "53055908-0e5c-4f82-b988-f014e5c97d90", 00:18:45.439 "base_bdev": "nvme0n1", 00:18:45.439 "thin_provision": true, 00:18:45.439 "num_allocated_clusters": 0, 00:18:45.439 "snapshot": false, 00:18:45.439 "clone": false, 00:18:45.439 "esnap_clone": false 00:18:45.439 } 00:18:45.439 } 00:18:45.439 } 00:18:45.439 ]' 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:45.439 { 00:18:45.439 "name": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:45.439 "aliases": [ 00:18:45.439 "lvs/nvme0n1p0" 00:18:45.439 ], 00:18:45.439 "product_name": "Logical Volume", 00:18:45.439 "block_size": 4096, 00:18:45.439 "num_blocks": 26476544, 00:18:45.439 "uuid": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:45.439 "assigned_rate_limits": { 00:18:45.439 "rw_ios_per_sec": 0, 00:18:45.439 "rw_mbytes_per_sec": 0, 00:18:45.439 "r_mbytes_per_sec": 0, 00:18:45.439 "w_mbytes_per_sec": 0 00:18:45.439 }, 00:18:45.439 "claimed": false, 00:18:45.439 "zoned": false, 00:18:45.439 "supported_io_types": { 00:18:45.439 "read": true, 00:18:45.439 "write": true, 00:18:45.439 "unmap": true, 00:18:45.439 "flush": false, 00:18:45.439 "reset": true, 00:18:45.439 "nvme_admin": false, 00:18:45.439 "nvme_io": false, 00:18:45.439 "nvme_io_md": false, 00:18:45.439 "write_zeroes": true, 00:18:45.439 "zcopy": false, 00:18:45.439 "get_zone_info": false, 00:18:45.439 "zone_management": false, 00:18:45.439 "zone_append": false, 00:18:45.439 "compare": false, 00:18:45.439 "compare_and_write": false, 00:18:45.439 "abort": false, 00:18:45.439 "seek_hole": true, 00:18:45.439 "seek_data": true, 00:18:45.439 "copy": false, 00:18:45.439 "nvme_iov_md": false 00:18:45.439 }, 00:18:45.439 "driver_specific": { 00:18:45.439 "lvol": { 00:18:45.439 "lvol_store_uuid": "53055908-0e5c-4f82-b988-f014e5c97d90", 00:18:45.439 "base_bdev": "nvme0n1", 00:18:45.439 "thin_provision": true, 00:18:45.439 "num_allocated_clusters": 0, 00:18:45.439 "snapshot": false, 00:18:45.439 "clone": false, 00:18:45.439 "esnap_clone": false 00:18:45.439 } 00:18:45.439 } 00:18:45.439 } 00:18:45.439 ]' 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:45.439 05:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:45.439 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:45.439 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:45.439 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:45.439 05:05:02 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:45.439 05:05:02 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:45.699 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f1622cb-821f-48ff-be2a-49ad90152ea0 00:18:45.699 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:45.699 { 00:18:45.699 "name": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:45.699 "aliases": [ 00:18:45.699 "lvs/nvme0n1p0" 00:18:45.699 ], 00:18:45.699 "product_name": "Logical Volume", 00:18:45.699 "block_size": 4096, 00:18:45.699 "num_blocks": 26476544, 00:18:45.699 "uuid": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:45.699 "assigned_rate_limits": { 00:18:45.699 "rw_ios_per_sec": 0, 00:18:45.699 "rw_mbytes_per_sec": 0, 00:18:45.699 "r_mbytes_per_sec": 0, 00:18:45.699 "w_mbytes_per_sec": 0 00:18:45.699 }, 00:18:45.699 "claimed": false, 00:18:45.699 "zoned": false, 00:18:45.699 "supported_io_types": { 00:18:45.699 "read": true, 00:18:45.699 "write": true, 00:18:45.699 "unmap": true, 00:18:45.699 "flush": false, 00:18:45.699 "reset": true, 00:18:45.699 "nvme_admin": false, 00:18:45.699 "nvme_io": false, 00:18:45.699 "nvme_io_md": false, 00:18:45.699 "write_zeroes": true, 00:18:45.699 "zcopy": false, 00:18:45.699 "get_zone_info": false, 00:18:45.699 "zone_management": false, 00:18:45.699 "zone_append": false, 00:18:45.699 "compare": false, 00:18:45.699 "compare_and_write": false, 00:18:45.699 "abort": false, 00:18:45.699 "seek_hole": true, 00:18:45.699 "seek_data": true, 00:18:45.699 "copy": false, 00:18:45.699 "nvme_iov_md": false 00:18:45.699 }, 00:18:45.699 "driver_specific": { 00:18:45.699 "lvol": { 00:18:45.699 "lvol_store_uuid": "53055908-0e5c-4f82-b988-f014e5c97d90", 00:18:45.699 "base_bdev": "nvme0n1", 00:18:45.699 "thin_provision": true, 00:18:45.699 "num_allocated_clusters": 0, 00:18:45.699 "snapshot": false, 00:18:45.699 "clone": false, 00:18:45.699 "esnap_clone": false 00:18:45.699 } 00:18:45.699 } 00:18:45.699 } 00:18:45.699 ]' 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:45.958 05:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6f1622cb-821f-48ff-be2a-49ad90152ea0 -c nvc0n1p0 --l2p_dram_limit 60 00:18:45.958 [2024-11-21 05:05:02.676913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.676949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:45.958 [2024-11-21 05:05:02.676959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:45.958 
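
The `[: -eq: unary operator expected` complaint from fio.sh line 52 above is the classic unquoted-empty-variable bug: the traced `'[' -eq 1 ']'` shows the tested variable expanded to nothing, leaving `[` with a dangling `-eq`. The test fails with status 2 and the script simply falls through, which is why the run continues. A minimal reproduction and the usual guard (the variable name is illustrative; the trace does not show which one was empty):

    # With $flag unset, the unquoted test collapses to '[ -eq 1 ]'.
    unset flag
    [ $flag -eq 1 ] 2>/dev/null || echo "malformed test, exit status $?"
    # Quoting plus a default keeps the expression well-formed either way:
    [ "${flag:-0}" -eq 1 ] && echo "flag is 1" || echo "flag is 0 or unset"
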
[2024-11-21 05:05:02.676967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.677029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.677038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.958 [2024-11-21 05:05:02.677045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:45.958 [2024-11-21 05:05:02.677054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.677085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:45.958 [2024-11-21 05:05:02.677297] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:45.958 [2024-11-21 05:05:02.677310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.677318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.958 [2024-11-21 05:05:02.677324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:18:45.958 [2024-11-21 05:05:02.677340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.677378] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 621b6d9b-b6ad-47cb-a413-11a5b968cfef 00:18:45.958 [2024-11-21 05:05:02.678361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.678383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:45.958 [2024-11-21 05:05:02.678393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:45.958 [2024-11-21 05:05:02.678401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.683194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.683221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.958 [2024-11-21 05:05:02.683231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.703 ms 00:18:45.958 [2024-11-21 05:05:02.683239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.683318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.683327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.958 [2024-11-21 05:05:02.683345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:45.958 [2024-11-21 05:05:02.683351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.683404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.683412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:45.958 [2024-11-21 05:05:02.683420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:45.958 [2024-11-21 05:05:02.683426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.683453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.958 [2024-11-21 05:05:02.684719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 
05:05:02.684745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.958 [2024-11-21 05:05:02.684752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:18:45.958 [2024-11-21 05:05:02.684760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.684795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.958 [2024-11-21 05:05:02.684804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:45.958 [2024-11-21 05:05:02.684810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:45.958 [2024-11-21 05:05:02.684819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.958 [2024-11-21 05:05:02.684854] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:45.958 [2024-11-21 05:05:02.684986] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:45.958 [2024-11-21 05:05:02.684996] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:45.958 [2024-11-21 05:05:02.685006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:45.958 [2024-11-21 05:05:02.685014] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:45.958 [2024-11-21 05:05:02.685024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:45.958 [2024-11-21 05:05:02.685031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:45.959 [2024-11-21 05:05:02.685039] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:45.959 [2024-11-21 05:05:02.685046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:45.959 [2024-11-21 05:05:02.685052] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:45.959 [2024-11-21 05:05:02.685059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.959 [2024-11-21 05:05:02.685066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:45.959 [2024-11-21 05:05:02.685072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:18:45.959 [2024-11-21 05:05:02.685079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.959 [2024-11-21 05:05:02.685167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.959 [2024-11-21 05:05:02.685178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:45.959 [2024-11-21 05:05:02.685186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:45.959 [2024-11-21 05:05:02.685194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.959 [2024-11-21 05:05:02.685288] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:45.959 [2024-11-21 05:05:02.685297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:45.959 [2024-11-21 05:05:02.685303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685317] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:45.959 [2024-11-21 05:05:02.685323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:45.959 [2024-11-21 05:05:02.685341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.959 [2024-11-21 05:05:02.685353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:45.959 [2024-11-21 05:05:02.685360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:45.959 [2024-11-21 05:05:02.685365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.959 [2024-11-21 05:05:02.685375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:45.959 [2024-11-21 05:05:02.685382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:45.959 [2024-11-21 05:05:02.685390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:45.959 [2024-11-21 05:05:02.685415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:45.959 [2024-11-21 05:05:02.685435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:45.959 [2024-11-21 05:05:02.685457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:45.959 [2024-11-21 05:05:02.685477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:45.959 [2024-11-21 05:05:02.685499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:45.959 [2024-11-21 05:05:02.685520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.959 [2024-11-21 05:05:02.685538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:45.959 [2024-11-21 05:05:02.685546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:45.959 [2024-11-21 05:05:02.685552] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.959 [2024-11-21 05:05:02.685560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:45.959 [2024-11-21 05:05:02.685567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:45.959 [2024-11-21 05:05:02.685575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:45.959 [2024-11-21 05:05:02.685588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:45.959 [2024-11-21 05:05:02.685593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685601] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:45.959 [2024-11-21 05:05:02.685618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:45.959 [2024-11-21 05:05:02.685628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.959 [2024-11-21 05:05:02.685645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:45.959 [2024-11-21 05:05:02.685651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:45.959 [2024-11-21 05:05:02.685659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:45.959 [2024-11-21 05:05:02.685665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:45.959 [2024-11-21 05:05:02.685672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:45.959 [2024-11-21 05:05:02.685678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:45.959 [2024-11-21 05:05:02.685689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:45.959 [2024-11-21 05:05:02.685697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:45.959 [2024-11-21 05:05:02.685713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:45.959 [2024-11-21 05:05:02.685721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:45.959 [2024-11-21 05:05:02.685727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:45.959 [2024-11-21 05:05:02.685735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:45.959 [2024-11-21 05:05:02.685741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:45.959 [2024-11-21 05:05:02.685752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:45.959 [2024-11-21 05:05:02.685758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:45.959 [2024-11-21 05:05:02.685765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:45.959 [2024-11-21 05:05:02.685772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:45.959 [2024-11-21 05:05:02.685805] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:45.959 [2024-11-21 05:05:02.685813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:45.959 [2024-11-21 05:05:02.685826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:45.959 [2024-11-21 05:05:02.685834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:45.960 [2024-11-21 05:05:02.685840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:45.960 [2024-11-21 05:05:02.685846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.960 [2024-11-21 05:05:02.685852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:45.960 [2024-11-21 05:05:02.685861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:18:45.960 [2024-11-21 05:05:02.685866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.960 [2024-11-21 05:05:02.685932] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
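
The layout numbers in the dump above are internally consistent: the 80.00 MiB l2p region is exactly the reported 20971520 L2P entries times the 4-byte L2P address size, and those entries map 20971520 × 4 KiB = 81920 MiB of logical space out of the 103424 MiB base device (the remainder presumably going to FTL metadata and spare bands). A quick cross-check of that arithmetic:

    # Cross-check the FTL layout dump: l2p region size and mapped capacity.
    entries=20971520      # "L2P entries" from the layout summary
    addr=4                # "L2P address size" in bytes
    blk=4096              # logical block size of the base bdev
    echo "l2p region: $(( entries * addr / 1024 / 1024 )) MiB"    # -> 80
    echo "mapped:     $(( entries * blk / 1024 / 1024 )) MiB"     # -> 81920

The `--l2p_dram_limit 60` passed to bdev_ftl_create caps how much of that 80 MiB map stays resident in DRAM, which is why the log later reports the l2p maximum resident size as 59 (of 60) MiB.
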
00:18:45.960 [2024-11-21 05:05:02.685941] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:49.243 [2024-11-21 05:05:05.240371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.243 [2024-11-21 05:05:05.240532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:49.243 [2024-11-21 05:05:05.240623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2554.426 ms 00:18:49.243 [2024-11-21 05:05:05.240647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.243 [2024-11-21 05:05:05.248021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.243 [2024-11-21 05:05:05.248152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.243 [2024-11-21 05:05:05.248212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.292 ms 00:18:49.243 [2024-11-21 05:05:05.248231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.243 [2024-11-21 05:05:05.248341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.243 [2024-11-21 05:05:05.248374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:49.243 [2024-11-21 05:05:05.248427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:49.243 [2024-11-21 05:05:05.248445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.243 [2024-11-21 05:05:05.264213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.264338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.244 [2024-11-21 05:05:05.264401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.708 ms 00:18:49.244 [2024-11-21 05:05:05.264428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.264492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.264511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.244 [2024-11-21 05:05:05.264529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:49.244 [2024-11-21 05:05:05.264543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.264906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.265000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.244 [2024-11-21 05:05:05.265079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:49.244 [2024-11-21 05:05:05.265100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.265230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.265349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.244 [2024-11-21 05:05:05.265381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:49.244 [2024-11-21 05:05:05.265396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.270435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.270567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.244 [2024-11-21 
05:05:05.270654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.996 ms 00:18:49.244 [2024-11-21 05:05:05.270699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.281051] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:49.244 [2024-11-21 05:05:05.293095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.293197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:49.244 [2024-11-21 05:05:05.293235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.156 ms 00:18:49.244 [2024-11-21 05:05:05.293254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.329981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.330084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:49.244 [2024-11-21 05:05:05.330125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.686 ms 00:18:49.244 [2024-11-21 05:05:05.330147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.330301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.330360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:49.244 [2024-11-21 05:05:05.330380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:49.244 [2024-11-21 05:05:05.330397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.333018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.333118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:49.244 [2024-11-21 05:05:05.333243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:18:49.244 [2024-11-21 05:05:05.333255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.335317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.335343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:49.244 [2024-11-21 05:05:05.335352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:18:49.244 [2024-11-21 05:05:05.335360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.335631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.335646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:49.244 [2024-11-21 05:05:05.335655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:18:49.244 [2024-11-21 05:05:05.335675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.358228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.358340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:49.244 [2024-11-21 05:05:05.358351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.518 ms 00:18:49.244 [2024-11-21 05:05:05.358368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.361732] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.361775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:49.244 [2024-11-21 05:05:05.361795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:18:49.244 [2024-11-21 05:05:05.361880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.364288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.364378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:49.244 [2024-11-21 05:05:05.364421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:18:49.244 [2024-11-21 05:05:05.364439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.366925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.367028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:49.244 [2024-11-21 05:05:05.367077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.442 ms 00:18:49.244 [2024-11-21 05:05:05.367100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.367182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.367206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:49.244 [2024-11-21 05:05:05.367226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:49.244 [2024-11-21 05:05:05.367281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.367360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.244 [2024-11-21 05:05:05.367459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:49.244 [2024-11-21 05:05:05.367479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:49.244 [2024-11-21 05:05:05.367517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.244 [2024-11-21 05:05:05.368350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2691.091 ms, result 0 00:18:49.244 { 00:18:49.244 "name": "ftl0", 00:18:49.244 "uuid": "621b6d9b-b6ad-47cb-a413-11a5b968cfef" 00:18:49.244 } 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:49.244 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:49.244 [ 00:18:49.244 { 00:18:49.244 "name": "ftl0", 00:18:49.244 "aliases": [ 00:18:49.244 "621b6d9b-b6ad-47cb-a413-11a5b968cfef" 00:18:49.244 ], 00:18:49.244 "product_name": "FTL disk", 00:18:49.244 
"block_size": 4096, 00:18:49.244 "num_blocks": 20971520, 00:18:49.244 "uuid": "621b6d9b-b6ad-47cb-a413-11a5b968cfef", 00:18:49.244 "assigned_rate_limits": { 00:18:49.244 "rw_ios_per_sec": 0, 00:18:49.244 "rw_mbytes_per_sec": 0, 00:18:49.244 "r_mbytes_per_sec": 0, 00:18:49.244 "w_mbytes_per_sec": 0 00:18:49.244 }, 00:18:49.244 "claimed": false, 00:18:49.244 "zoned": false, 00:18:49.244 "supported_io_types": { 00:18:49.244 "read": true, 00:18:49.244 "write": true, 00:18:49.244 "unmap": true, 00:18:49.244 "flush": true, 00:18:49.244 "reset": false, 00:18:49.244 "nvme_admin": false, 00:18:49.244 "nvme_io": false, 00:18:49.244 "nvme_io_md": false, 00:18:49.244 "write_zeroes": true, 00:18:49.244 "zcopy": false, 00:18:49.244 "get_zone_info": false, 00:18:49.244 "zone_management": false, 00:18:49.244 "zone_append": false, 00:18:49.244 "compare": false, 00:18:49.244 "compare_and_write": false, 00:18:49.244 "abort": false, 00:18:49.244 "seek_hole": false, 00:18:49.244 "seek_data": false, 00:18:49.244 "copy": false, 00:18:49.244 "nvme_iov_md": false 00:18:49.244 }, 00:18:49.244 "driver_specific": { 00:18:49.244 "ftl": { 00:18:49.244 "base_bdev": "6f1622cb-821f-48ff-be2a-49ad90152ea0", 00:18:49.244 "cache": "nvc0n1p0" 00:18:49.245 } 00:18:49.245 } 00:18:49.245 } 00:18:49.245 ] 00:18:49.245 05:05:05 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:49.245 05:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:49.245 05:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:49.505 05:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:49.505 05:05:05 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:49.505 [2024-11-21 05:05:06.174865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.174894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:49.505 [2024-11-21 05:05:06.174914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:49.505 [2024-11-21 05:05:06.174920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.174949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:49.505 [2024-11-21 05:05:06.175357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.175374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:49.505 [2024-11-21 05:05:06.175384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:18:49.505 [2024-11-21 05:05:06.175391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.175911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.175941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:49.505 [2024-11-21 05:05:06.175949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:18:49.505 [2024-11-21 05:05:06.175957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.178531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.178551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:49.505 [2024-11-21 
05:05:06.178559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:18:49.505 [2024-11-21 05:05:06.178577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.183142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.183168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:49.505 [2024-11-21 05:05:06.183176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 00:18:49.505 [2024-11-21 05:05:06.183184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.184593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.184636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:49.505 [2024-11-21 05:05:06.184643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:18:49.505 [2024-11-21 05:05:06.184650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.188337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.188452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:49.505 [2024-11-21 05:05:06.188466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:18:49.505 [2024-11-21 05:05:06.188474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.188632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.188642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:49.505 [2024-11-21 05:05:06.188650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:49.505 [2024-11-21 05:05:06.188657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.189944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.189970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:49.505 [2024-11-21 05:05:06.189977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:18:49.505 [2024-11-21 05:05:06.189984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.190956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.190986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:49.505 [2024-11-21 05:05:06.190993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:18:49.505 [2024-11-21 05:05:06.191002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.191741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.191769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:49.505 [2024-11-21 05:05:06.191776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:18:49.505 [2024-11-21 05:05:06.191783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.192504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.505 [2024-11-21 05:05:06.192533] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:49.505 [2024-11-21 05:05:06.192540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:18:49.505 [2024-11-21 05:05:06.192547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.505 [2024-11-21 05:05:06.192586] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:49.505 [2024-11-21 05:05:06.192599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 
05:05:06.192765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:49.506 [2024-11-21 05:05:06.192932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.192996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:49.506 [2024-11-21 05:05:06.193129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:49.507 [2024-11-21 05:05:06.193325] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:49.507 [2024-11-21 05:05:06.193332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 621b6d9b-b6ad-47cb-a413-11a5b968cfef 00:18:49.507 [2024-11-21 05:05:06.193350] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:49.507 [2024-11-21 05:05:06.193363] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:49.507 [2024-11-21 05:05:06.193370] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:49.507 [2024-11-21 05:05:06.193375] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:49.507 [2024-11-21 05:05:06.193382] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:49.507 [2024-11-21 05:05:06.193389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:49.507 [2024-11-21 05:05:06.193396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:49.507 [2024-11-21 05:05:06.193401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:49.507 [2024-11-21 05:05:06.193407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:49.507 [2024-11-21 05:05:06.193412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.507 [2024-11-21 05:05:06.193419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:49.507 [2024-11-21 05:05:06.193425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:18:49.507 [2024-11-21 05:05:06.193434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.194593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.507 [2024-11-21 05:05:06.194636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:49.507 [2024-11-21 05:05:06.194644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:18:49.507 [2024-11-21 05:05:06.194652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.194728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.507 [2024-11-21 05:05:06.194738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:49.507 [2024-11-21 05:05:06.194744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:49.507 [2024-11-21 05:05:06.194753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.199267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.199296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.507 [2024-11-21 05:05:06.199304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.199312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 
[2024-11-21 05:05:06.199363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.199372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.507 [2024-11-21 05:05:06.199379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.199389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.199450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.199462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.507 [2024-11-21 05:05:06.199468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.199475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.199499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.199506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.507 [2024-11-21 05:05:06.199513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.199520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.207824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.207856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.507 [2024-11-21 05:05:06.207864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.207872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.214600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.214641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.507 [2024-11-21 05:05:06.214649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.214657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.214721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.214733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.507 [2024-11-21 05:05:06.214739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.214746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.214807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.214818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.507 [2024-11-21 05:05:06.214826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.214833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.214909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.214921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.507 [2024-11-21 05:05:06.214927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.214934] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.214981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.214995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:49.507 [2024-11-21 05:05:06.215002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.215010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.215048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.507 [2024-11-21 05:05:06.215075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.507 [2024-11-21 05:05:06.215081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.507 [2024-11-21 05:05:06.215088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.507 [2024-11-21 05:05:06.215132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.508 [2024-11-21 05:05:06.215142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.508 [2024-11-21 05:05:06.215159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.508 [2024-11-21 05:05:06.215166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.508 [2024-11-21 05:05:06.215320] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.437 ms, result 0 00:18:49.508 true 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86623 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86623 ']' 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86623 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86623 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.766 killing process with pid 86623 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.766 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86623' 00:18:49.767 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86623 00:18:49.767 05:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86623 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:57.875 05:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:57.875 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:57.875 fio-3.35 00:18:57.875 Starting 1 thread 00:19:02.077 00:19:02.077 test: (groupid=0, jobs=1): err= 0: pid=86786: Thu Nov 21 05:05:18 2024 00:19:02.077 read: IOPS=1003, BW=66.6MiB/s (69.8MB/s)(255MiB/3821msec) 00:19:02.077 slat (nsec): min=3974, max=44130, avg=6836.75, stdev=3402.70 00:19:02.077 clat (usec): min=230, max=1201, avg=446.81, stdev=163.20 00:19:02.077 lat (usec): min=235, max=1210, avg=453.65, stdev=164.94 00:19:02.077 clat percentiles (usec): 00:19:02.077 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 306], 20.00th=[ 322], 00:19:02.077 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 449], 00:19:02.077 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 824], 00:19:02.077 | 99.00th=[ 988], 99.50th=[ 1045], 99.90th=[ 1172], 99.95th=[ 1188], 00:19:02.077 | 99.99th=[ 1205] 00:19:02.077 write: IOPS=1009, BW=67.1MiB/s (70.3MB/s)(256MiB/3818msec); 0 zone resets 00:19:02.077 slat (usec): min=14, max=102, avg=21.41, stdev= 5.91 00:19:02.077 clat (usec): min=277, max=1885, avg=505.30, stdev=211.83 00:19:02.077 lat (usec): min=295, max=1921, avg=526.71, stdev=215.13 00:19:02.077 clat percentiles (usec): 00:19:02.077 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 351], 00:19:02.077 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 537], 00:19:02.077 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 734], 95.00th=[ 955], 00:19:02.077 | 99.00th=[ 1205], 99.50th=[ 1352], 99.90th=[ 1680], 99.95th=[ 1713], 00:19:02.077 | 99.99th=[ 1893] 00:19:02.077 bw ( KiB/s): min=46784, max=95472, per=97.37%, avg=66873.14, stdev=18884.21, samples=7 00:19:02.077 iops : min= 688, max= 1404, avg=983.43, stdev=277.71, samples=7 00:19:02.077 lat (usec) : 250=0.01%, 500=62.53%, 750=29.60%, 
1000=5.48% 00:19:02.077 lat (msec) : 2=2.38% 00:19:02.077 cpu : usr=98.98%, sys=0.10%, ctx=4, majf=0, minf=1181 00:19:02.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.077 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.077 00:19:02.077 Run status group 0 (all jobs): 00:19:02.077 READ: bw=66.6MiB/s (69.8MB/s), 66.6MiB/s-66.6MiB/s (69.8MB/s-69.8MB/s), io=255MiB (267MB), run=3821-3821msec 00:19:02.077 WRITE: bw=67.1MiB/s (70.3MB/s), 67.1MiB/s-67.1MiB/s (70.3MB/s-70.3MB/s), io=256MiB (269MB), run=3818-3818msec 00:19:02.649 ----------------------------------------------------- 00:19:02.649 Suppressions used: 00:19:02.649 count bytes template 00:19:02.649 1 5 /usr/src/fio/parse.c 00:19:02.649 1 8 libtcmalloc_minimal.so 00:19:02.649 1 904 libcrypto.so 00:19:02.649 ----------------------------------------------------- 00:19:02.649 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:02.649 05:05:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:02.909 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:02.909 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:02.909 fio-3.35 00:19:02.909 Starting 2 threads 00:19:29.448 00:19:29.448 first_half: (groupid=0, jobs=1): err= 0: pid=86878: Thu Nov 21 05:05:42 2024 00:19:29.448 read: IOPS=2908, BW=11.4MiB/s (11.9MB/s)(255MiB/22431msec) 00:19:29.448 slat (usec): min=2, max=113, avg= 4.27, stdev= 1.25 00:19:29.448 clat (usec): min=582, max=235076, avg=33070.10, stdev=17783.85 00:19:29.448 lat (usec): min=587, max=235081, avg=33074.37, stdev=17784.01 00:19:29.448 clat percentiles (msec): 00:19:29.448 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 30], 00:19:29.448 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:29.448 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 43], 00:19:29.448 | 99.00th=[ 138], 99.50th=[ 159], 99.90th=[ 199], 99.95th=[ 220], 00:19:29.449 | 99.99th=[ 232] 00:19:29.449 write: IOPS=3431, BW=13.4MiB/s (14.1MB/s)(256MiB/19099msec); 0 zone resets 00:19:29.449 slat (usec): min=3, max=532, avg= 6.22, stdev= 3.60 00:19:29.449 clat (usec): min=350, max=82982, avg=10868.27, stdev=18083.82 00:19:29.449 lat (usec): min=360, max=82990, avg=10874.49, stdev=18083.95 00:19:29.449 clat percentiles (usec): 00:19:29.449 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 824], 20.00th=[ 1254], 00:19:29.449 | 30.00th=[ 2966], 40.00th=[ 4359], 50.00th=[ 5014], 60.00th=[ 5473], 00:19:29.449 | 70.00th=[ 6259], 80.00th=[10552], 90.00th=[28967], 95.00th=[64226], 00:19:29.449 | 99.00th=[72877], 99.50th=[74974], 99.90th=[78119], 99.95th=[80217], 00:19:29.449 | 99.99th=[82314] 00:19:29.449 bw ( KiB/s): min= 984, max=40264, per=83.04%, avg=22795.13, stdev=12059.10, samples=23 00:19:29.449 iops : min= 246, max=10066, avg=5698.78, stdev=3014.78, samples=23 00:19:29.449 lat (usec) : 500=0.02%, 750=2.92%, 1000=4.62% 00:19:29.449 lat (msec) : 2=5.08%, 4=6.34%, 10=22.59%, 20=5.18%, 50=46.84% 00:19:29.449 lat (msec) : 100=5.47%, 250=0.95% 00:19:29.449 cpu : usr=99.28%, sys=0.10%, ctx=38, majf=0, minf=5577 00:19:29.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:29.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.449 issued rwts: total=65239,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.449 second_half: (groupid=0, jobs=1): err= 0: pid=86879: Thu Nov 21 05:05:42 2024 00:19:29.449 read: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(254MiB/22269msec) 00:19:29.449 slat (nsec): min=3087, max=21588, avg=4934.08, stdev=858.77 00:19:29.449 clat (usec): min=689, max=236697, avg=33830.65, stdev=16991.04 00:19:29.449 lat (usec): min=694, max=236702, avg=33835.59, stdev=16991.06 00:19:29.449 clat percentiles (msec): 00:19:29.449 | 1.00th=[ 5], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 31], 00:19:29.449 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:29.449 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 46], 
00:19:29.449 | 99.00th=[ 134], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 174], 00:19:29.449 | 99.99th=[ 192] 00:19:29.449 write: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(256MiB/15816msec); 0 zone resets 00:19:29.449 slat (usec): min=3, max=114, avg= 6.45, stdev= 2.40 00:19:29.449 clat (usec): min=377, max=83171, avg=9846.48, stdev=17816.85 00:19:29.449 lat (usec): min=384, max=83178, avg=9852.93, stdev=17816.87 00:19:29.449 clat percentiles (usec): 00:19:29.449 | 1.00th=[ 676], 5.00th=[ 758], 10.00th=[ 832], 20.00th=[ 1090], 00:19:29.449 | 30.00th=[ 1418], 40.00th=[ 2835], 50.00th=[ 3949], 60.00th=[ 5014], 00:19:29.449 | 70.00th=[ 5997], 80.00th=[ 9896], 90.00th=[15008], 95.00th=[63701], 00:19:29.449 | 99.00th=[72877], 99.50th=[74974], 99.90th=[78119], 99.95th=[80217], 00:19:29.449 | 99.99th=[82314] 00:19:29.449 bw ( KiB/s): min= 1064, max=40480, per=100.00%, avg=27594.11, stdev=12900.03, samples=19 00:19:29.449 iops : min= 266, max=10120, avg=6898.53, stdev=3225.01, samples=19 00:19:29.449 lat (usec) : 500=0.02%, 750=2.28%, 1000=6.38% 00:19:29.449 lat (msec) : 2=8.67%, 4=8.47%, 10=15.26%, 20=5.70%, 50=46.53% 00:19:29.449 lat (msec) : 100=5.65%, 250=1.03% 00:19:29.449 cpu : usr=99.36%, sys=0.13%, ctx=40, majf=0, minf=5555 00:19:29.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:29.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.449 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:29.449 issued rwts: total=65144,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:29.449 00:19:29.449 Run status group 0 (all jobs): 00:19:29.449 READ: bw=22.7MiB/s (23.8MB/s), 11.4MiB/s-11.4MiB/s (11.9MB/s-12.0MB/s), io=509MiB (534MB), run=22269-22431msec 00:19:29.449 WRITE: bw=26.8MiB/s (28.1MB/s), 13.4MiB/s-16.2MiB/s (14.1MB/s-17.0MB/s), io=512MiB (537MB), run=15816-19099msec 00:19:29.449 ----------------------------------------------------- 00:19:29.449 Suppressions used: 00:19:29.449 count bytes template 00:19:29.449 2 10 /usr/src/fio/parse.c 00:19:29.449 2 192 /usr/src/fio/iolog.c 00:19:29.449 1 8 libtcmalloc_minimal.so 00:19:29.449 1 904 libcrypto.so 00:19:29.449 ----------------------------------------------------- 00:19:29.449 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:29.449 05:05:44 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:29.449 05:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:29.449 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:29.449 fio-3.35 00:19:29.449 Starting 1 thread 00:19:44.363 00:19:44.363 test: (groupid=0, jobs=1): err= 0: pid=87168: Thu Nov 21 05:05:58 2024 00:19:44.363 read: IOPS=8025, BW=31.4MiB/s (32.9MB/s)(255MiB/8124msec) 00:19:44.363 slat (nsec): min=2906, max=67064, avg=3314.48, stdev=718.82 00:19:44.363 clat (usec): min=791, max=30793, avg=15942.49, stdev=1820.22 00:19:44.363 lat (usec): min=797, max=30797, avg=15945.80, stdev=1820.23 00:19:44.363 clat percentiles (usec): 00:19:44.363 | 1.00th=[12911], 5.00th=[13829], 10.00th=[14353], 20.00th=[14746], 00:19:44.363 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:19:44.363 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17433], 95.00th=[20055], 00:19:44.363 | 99.00th=[22414], 99.50th=[23200], 99.90th=[28705], 99.95th=[30016], 00:19:44.363 | 99.99th=[30802] 00:19:44.363 write: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(256MiB/5564msec); 0 zone resets 00:19:44.363 slat (usec): min=3, max=1345, avg= 5.07, stdev= 7.68 00:19:44.363 clat (usec): min=548, max=59951, avg=10816.16, stdev=12492.37 00:19:44.363 lat (usec): min=553, max=59955, avg=10821.23, stdev=12492.33 00:19:44.363 clat percentiles (usec): 00:19:44.363 | 1.00th=[ 996], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1565], 00:19:44.363 | 30.00th=[ 1827], 40.00th=[ 2671], 50.00th=[ 6915], 60.00th=[ 8979], 00:19:44.363 | 70.00th=[10945], 80.00th=[13698], 90.00th=[36963], 95.00th=[39060], 00:19:44.363 | 99.00th=[42206], 99.50th=[43779], 99.90th=[47973], 99.95th=[50594], 00:19:44.363 | 99.99th=[55313] 00:19:44.363 bw ( KiB/s): min= 4104, max=67200, per=92.72%, avg=43684.08, stdev=14446.64, samples=12 00:19:44.363 iops : min= 1026, max=16800, avg=10921.00, stdev=3611.67, samples=12 00:19:44.363 lat (usec) : 750=0.03%, 1000=0.50% 00:19:44.363 lat (msec) : 2=16.52%, 4=3.77%, 10=12.10%, 20=56.56%, 50=10.51% 00:19:44.363 lat (msec) : 100=0.03% 00:19:44.363 cpu : usr=99.19%, sys=0.18%, ctx=18, majf=0, minf=5577 
00:19:44.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:44.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.363 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.363 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.363 00:19:44.363 Run status group 0 (all jobs): 00:19:44.363 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=255MiB (267MB), run=8124-8124msec 00:19:44.363 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=256MiB (268MB), run=5564-5564msec 00:19:44.363 ----------------------------------------------------- 00:19:44.363 Suppressions used: 00:19:44.363 count bytes template 00:19:44.363 1 5 /usr/src/fio/parse.c 00:19:44.363 2 192 /usr/src/fio/iolog.c 00:19:44.363 1 8 libtcmalloc_minimal.so 00:19:44.363 1 904 libcrypto.so 00:19:44.363 ----------------------------------------------------- 00:19:44.363 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.363 Remove shared memory files 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69516 /dev/shm/spdk_tgt_trace.pid85567 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:44.363 ************************************ 00:19:44.363 END TEST ftl_fio_basic 00:19:44.363 ************************************ 00:19:44.363 00:19:44.363 real 1m0.553s 00:19:44.363 user 2m17.481s 00:19:44.363 sys 0m2.621s 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.363 05:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:44.363 05:05:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:44.363 05:05:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:44.363 05:05:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.363 05:05:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:44.363 ************************************ 00:19:44.363 START TEST ftl_bdevperf 00:19:44.363 ************************************ 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:44.363 * Looking for test storage... 
00:19:44.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.363 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.363 --rc genhtml_branch_coverage=1 00:19:44.363 --rc genhtml_function_coverage=1 00:19:44.363 --rc genhtml_legend=1 00:19:44.363 --rc geninfo_all_blocks=1 00:19:44.363 --rc geninfo_unexecuted_blocks=1 00:19:44.364 00:19:44.364 ' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.364 --rc genhtml_branch_coverage=1 00:19:44.364 
--rc genhtml_function_coverage=1 00:19:44.364 --rc genhtml_legend=1 00:19:44.364 --rc geninfo_all_blocks=1 00:19:44.364 --rc geninfo_unexecuted_blocks=1 00:19:44.364 00:19:44.364 ' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.364 --rc genhtml_branch_coverage=1 00:19:44.364 --rc genhtml_function_coverage=1 00:19:44.364 --rc genhtml_legend=1 00:19:44.364 --rc geninfo_all_blocks=1 00:19:44.364 --rc geninfo_unexecuted_blocks=1 00:19:44.364 00:19:44.364 ' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.364 --rc genhtml_branch_coverage=1 00:19:44.364 --rc genhtml_function_coverage=1 00:19:44.364 --rc genhtml_legend=1 00:19:44.364 --rc geninfo_all_blocks=1 00:19:44.364 --rc geninfo_unexecuted_blocks=1 00:19:44.364 00:19:44.364 ' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=87396 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 87396 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 87396 ']' 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.364 05:05:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:44.364 [2024-11-21 05:05:59.935049] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
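The bdevperf launch traced here follows the standard SPDK app pattern: start the process parked in wait-for-RPC mode, then configure bdevs over the UNIX socket before any I/O begins. A condensed sketch, assuming the autotest_common.sh helpers waitforlisten and killprocess seen in the trace:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # -z waits for an RPC before starting I/O; -T names the target bdev
    # that the later perform_tests RPCs will exercise.
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT

    # Block until /var/tmp/spdk.sock accepts RPC connections.
    waitforlisten "$bdevperf_pid"
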
00:19:44.364 [2024-11-21 05:05:59.935471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87396 ] 00:19:44.364 [2024-11-21 05:06:00.096336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.364 [2024-11-21 05:06:00.119936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:44.364 05:06:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:44.364 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.625 { 00:19:44.625 "name": "nvme0n1", 00:19:44.625 "aliases": [ 00:19:44.625 "29a1edd4-6d7e-488e-b782-25e76b3d9a21" 00:19:44.625 ], 00:19:44.625 "product_name": "NVMe disk", 00:19:44.625 "block_size": 4096, 00:19:44.625 "num_blocks": 1310720, 00:19:44.625 "uuid": "29a1edd4-6d7e-488e-b782-25e76b3d9a21", 00:19:44.625 "numa_id": -1, 00:19:44.625 "assigned_rate_limits": { 00:19:44.625 "rw_ios_per_sec": 0, 00:19:44.625 "rw_mbytes_per_sec": 0, 00:19:44.625 "r_mbytes_per_sec": 0, 00:19:44.625 "w_mbytes_per_sec": 0 00:19:44.625 }, 00:19:44.625 "claimed": true, 00:19:44.625 "claim_type": "read_many_write_one", 00:19:44.625 "zoned": false, 00:19:44.625 "supported_io_types": { 00:19:44.625 "read": true, 00:19:44.625 "write": true, 00:19:44.625 "unmap": true, 00:19:44.625 "flush": true, 00:19:44.625 "reset": true, 00:19:44.625 "nvme_admin": true, 00:19:44.625 "nvme_io": true, 00:19:44.625 "nvme_io_md": false, 00:19:44.625 "write_zeroes": true, 00:19:44.625 "zcopy": false, 00:19:44.625 "get_zone_info": false, 00:19:44.625 "zone_management": false, 00:19:44.625 "zone_append": false, 00:19:44.625 "compare": true, 00:19:44.625 "compare_and_write": false, 00:19:44.625 "abort": true, 00:19:44.625 "seek_hole": false, 00:19:44.625 "seek_data": false, 00:19:44.625 "copy": true, 00:19:44.625 "nvme_iov_md": false 00:19:44.625 }, 00:19:44.625 "driver_specific": { 00:19:44.625 
"nvme": [ 00:19:44.625 { 00:19:44.625 "pci_address": "0000:00:11.0", 00:19:44.625 "trid": { 00:19:44.625 "trtype": "PCIe", 00:19:44.625 "traddr": "0000:00:11.0" 00:19:44.625 }, 00:19:44.625 "ctrlr_data": { 00:19:44.625 "cntlid": 0, 00:19:44.625 "vendor_id": "0x1b36", 00:19:44.625 "model_number": "QEMU NVMe Ctrl", 00:19:44.625 "serial_number": "12341", 00:19:44.625 "firmware_revision": "8.0.0", 00:19:44.625 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.625 "oacs": { 00:19:44.625 "security": 0, 00:19:44.625 "format": 1, 00:19:44.625 "firmware": 0, 00:19:44.625 "ns_manage": 1 00:19:44.625 }, 00:19:44.625 "multi_ctrlr": false, 00:19:44.625 "ana_reporting": false 00:19:44.625 }, 00:19:44.625 "vs": { 00:19:44.625 "nvme_version": "1.4" 00:19:44.625 }, 00:19:44.625 "ns_data": { 00:19:44.625 "id": 1, 00:19:44.625 "can_share": false 00:19:44.625 } 00:19:44.625 } 00:19:44.625 ], 00:19:44.625 "mp_policy": "active_passive" 00:19:44.625 } 00:19:44.625 } 00:19:44.625 ]' 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.625 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.886 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=53055908-0e5c-4f82-b988-f014e5c97d90 00:19:44.886 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.886 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53055908-0e5c-4f82-b988-f014e5c97d90 00:19:45.146 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:45.407 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3b50b9a7-4a38-4ee4-9f25-f59840afb90b 00:19:45.407 05:06:01 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3b50b9a7-4a38-4ee4-9f25-f59840afb90b 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:45.668 05:06:02 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:45.668 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:45.668 { 00:19:45.668 "name": "ca5db010-8ace-4569-8635-c49cedfc5ed7", 00:19:45.668 "aliases": [ 00:19:45.668 "lvs/nvme0n1p0" 00:19:45.668 ], 00:19:45.668 "product_name": "Logical Volume", 00:19:45.668 "block_size": 4096, 00:19:45.668 "num_blocks": 26476544, 00:19:45.668 "uuid": "ca5db010-8ace-4569-8635-c49cedfc5ed7", 00:19:45.668 "assigned_rate_limits": { 00:19:45.668 "rw_ios_per_sec": 0, 00:19:45.668 "rw_mbytes_per_sec": 0, 00:19:45.668 "r_mbytes_per_sec": 0, 00:19:45.668 "w_mbytes_per_sec": 0 00:19:45.668 }, 00:19:45.668 "claimed": false, 00:19:45.668 "zoned": false, 00:19:45.668 "supported_io_types": { 00:19:45.668 "read": true, 00:19:45.668 "write": true, 00:19:45.668 "unmap": true, 00:19:45.668 "flush": false, 00:19:45.668 "reset": true, 00:19:45.668 "nvme_admin": false, 00:19:45.668 "nvme_io": false, 00:19:45.668 "nvme_io_md": false, 00:19:45.668 "write_zeroes": true, 00:19:45.668 "zcopy": false, 00:19:45.668 "get_zone_info": false, 00:19:45.668 "zone_management": false, 00:19:45.668 "zone_append": false, 00:19:45.668 "compare": false, 00:19:45.668 "compare_and_write": false, 00:19:45.668 "abort": false, 00:19:45.668 "seek_hole": true, 00:19:45.668 "seek_data": true, 00:19:45.668 "copy": false, 00:19:45.668 "nvme_iov_md": false 00:19:45.668 }, 00:19:45.668 "driver_specific": { 00:19:45.668 "lvol": { 00:19:45.669 "lvol_store_uuid": "3b50b9a7-4a38-4ee4-9f25-f59840afb90b", 00:19:45.669 "base_bdev": "nvme0n1", 00:19:45.669 "thin_provision": true, 00:19:45.669 "num_allocated_clusters": 0, 00:19:45.669 "snapshot": false, 00:19:45.669 "clone": false, 00:19:45.669 "esnap_clone": false 00:19:45.669 } 00:19:45.669 } 00:19:45.669 } 00:19:45.669 ]' 00:19:45.669 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.930 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:46.191 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:46.192 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.192 { 00:19:46.192 "name": "ca5db010-8ace-4569-8635-c49cedfc5ed7", 00:19:46.192 "aliases": [ 00:19:46.192 "lvs/nvme0n1p0" 00:19:46.192 ], 00:19:46.192 "product_name": "Logical Volume", 00:19:46.192 "block_size": 4096, 00:19:46.192 "num_blocks": 26476544, 00:19:46.192 "uuid": "ca5db010-8ace-4569-8635-c49cedfc5ed7", 00:19:46.192 "assigned_rate_limits": { 00:19:46.192 "rw_ios_per_sec": 0, 00:19:46.192 "rw_mbytes_per_sec": 0, 00:19:46.192 "r_mbytes_per_sec": 0, 00:19:46.192 "w_mbytes_per_sec": 0 00:19:46.192 }, 00:19:46.192 "claimed": false, 00:19:46.192 "zoned": false, 00:19:46.192 "supported_io_types": { 00:19:46.192 "read": true, 00:19:46.192 "write": true, 00:19:46.192 "unmap": true, 00:19:46.192 "flush": false, 00:19:46.192 "reset": true, 00:19:46.192 "nvme_admin": false, 00:19:46.192 "nvme_io": false, 00:19:46.192 "nvme_io_md": false, 00:19:46.192 "write_zeroes": true, 00:19:46.192 "zcopy": false, 00:19:46.192 "get_zone_info": false, 00:19:46.192 "zone_management": false, 00:19:46.192 "zone_append": false, 00:19:46.192 "compare": false, 00:19:46.192 "compare_and_write": false, 00:19:46.192 "abort": false, 00:19:46.192 "seek_hole": true, 00:19:46.192 "seek_data": true, 00:19:46.192 "copy": false, 00:19:46.192 "nvme_iov_md": false 00:19:46.192 }, 00:19:46.192 "driver_specific": { 00:19:46.192 "lvol": { 00:19:46.192 "lvol_store_uuid": "3b50b9a7-4a38-4ee4-9f25-f59840afb90b", 00:19:46.192 "base_bdev": "nvme0n1", 00:19:46.192 "thin_provision": true, 00:19:46.192 "num_allocated_clusters": 0, 00:19:46.192 "snapshot": false, 00:19:46.192 "clone": false, 00:19:46.192 "esnap_clone": false 00:19:46.192 } 00:19:46.192 } 00:19:46.192 } 00:19:46.192 ]' 00:19:46.192 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:46.453 05:06:02 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:46.453 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca5db010-8ace-4569-8635-c49cedfc5ed7 00:19:46.714 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:46.714 { 00:19:46.714 "name": "ca5db010-8ace-4569-8635-c49cedfc5ed7", 00:19:46.714 "aliases": [ 00:19:46.714 "lvs/nvme0n1p0" 00:19:46.714 ], 00:19:46.714 "product_name": "Logical Volume", 00:19:46.714 "block_size": 4096, 00:19:46.714 "num_blocks": 26476544, 00:19:46.714 "uuid": "ca5db010-8ace-4569-8635-c49cedfc5ed7", 00:19:46.714 "assigned_rate_limits": { 00:19:46.714 "rw_ios_per_sec": 0, 00:19:46.715 "rw_mbytes_per_sec": 0, 00:19:46.715 "r_mbytes_per_sec": 0, 00:19:46.715 "w_mbytes_per_sec": 0 00:19:46.715 }, 00:19:46.715 "claimed": false, 00:19:46.715 "zoned": false, 00:19:46.715 "supported_io_types": { 00:19:46.715 "read": true, 00:19:46.715 "write": true, 00:19:46.715 "unmap": true, 00:19:46.715 "flush": false, 00:19:46.715 "reset": true, 00:19:46.715 "nvme_admin": false, 00:19:46.715 "nvme_io": false, 00:19:46.715 "nvme_io_md": false, 00:19:46.715 "write_zeroes": true, 00:19:46.715 "zcopy": false, 00:19:46.715 "get_zone_info": false, 00:19:46.715 "zone_management": false, 00:19:46.715 "zone_append": false, 00:19:46.715 "compare": false, 00:19:46.715 "compare_and_write": false, 00:19:46.715 "abort": false, 00:19:46.715 "seek_hole": true, 00:19:46.715 "seek_data": true, 00:19:46.715 "copy": false, 00:19:46.715 "nvme_iov_md": false 00:19:46.715 }, 00:19:46.715 "driver_specific": { 00:19:46.715 "lvol": { 00:19:46.715 "lvol_store_uuid": "3b50b9a7-4a38-4ee4-9f25-f59840afb90b", 00:19:46.715 "base_bdev": "nvme0n1", 00:19:46.715 "thin_provision": true, 00:19:46.715 "num_allocated_clusters": 0, 00:19:46.715 "snapshot": false, 00:19:46.715 "clone": false, 00:19:46.715 "esnap_clone": false 00:19:46.715 } 00:19:46.715 } 00:19:46.715 } 00:19:46.715 ]' 00:19:46.715 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:46.715 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:46.715 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:46.977 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:46.977 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:46.977 05:06:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:46.977 05:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:46.977 05:06:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ca5db010-8ace-4569-8635-c49cedfc5ed7 -c nvc0n1p0 --l2p_dram_limit 20 00:19:46.977 [2024-11-21 05:06:03.637050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.977 [2024-11-21 05:06:03.637191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.977 [2024-11-21 05:06:03.637210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:46.977 [2024-11-21 05:06:03.637218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.977 [2024-11-21 05:06:03.637260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.977 [2024-11-21 05:06:03.637268] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.977 [2024-11-21 05:06:03.637277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:46.977 [2024-11-21 05:06:03.637283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.977 [2024-11-21 05:06:03.637299] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.977 [2024-11-21 05:06:03.637486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.977 [2024-11-21 05:06:03.637501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.977 [2024-11-21 05:06:03.637507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.977 [2024-11-21 05:06:03.637517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:19:46.977 [2024-11-21 05:06:03.637522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.977 [2024-11-21 05:06:03.637604] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 960aff3e-5f36-4172-8185-150b6fe5d780 00:19:46.977 [2024-11-21 05:06:03.638538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.977 [2024-11-21 05:06:03.638566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.977 [2024-11-21 05:06:03.638574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:46.977 [2024-11-21 05:06:03.638581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.977 [2024-11-21 05:06:03.643353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.977 [2024-11-21 05:06:03.643382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.977 [2024-11-21 05:06:03.643390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.732 ms 00:19:46.977 [2024-11-21 05:06:03.643399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.977 [2024-11-21 05:06:03.643454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.978 [2024-11-21 05:06:03.643462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.978 [2024-11-21 05:06:03.643470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:46.978 [2024-11-21 05:06:03.643477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.978 [2024-11-21 05:06:03.643508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.978 [2024-11-21 05:06:03.643517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.978 [2024-11-21 05:06:03.643523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:46.978 [2024-11-21 05:06:03.643531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.978 [2024-11-21 05:06:03.643544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.978 [2024-11-21 05:06:03.644823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.978 [2024-11-21 05:06:03.644850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.978 [2024-11-21 05:06:03.644861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:19:46.978 [2024-11-21 05:06:03.644866] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.978 [2024-11-21 05:06:03.644890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.978 [2024-11-21 05:06:03.644896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.978 [2024-11-21 05:06:03.644905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:46.978 [2024-11-21 05:06:03.644910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.978 [2024-11-21 05:06:03.644922] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.978 [2024-11-21 05:06:03.645026] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.978 [2024-11-21 05:06:03.645037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.978 [2024-11-21 05:06:03.645045] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.978 [2024-11-21 05:06:03.645053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645060] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.978 [2024-11-21 05:06:03.645073] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.978 [2024-11-21 05:06:03.645083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.978 [2024-11-21 05:06:03.645090] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.978 [2024-11-21 05:06:03.645097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.978 [2024-11-21 05:06:03.645103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.978 [2024-11-21 05:06:03.645113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:46.978 [2024-11-21 05:06:03.645140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.978 [2024-11-21 05:06:03.645206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.978 [2024-11-21 05:06:03.645213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.978 [2024-11-21 05:06:03.645223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:46.978 [2024-11-21 05:06:03.645229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.978 [2024-11-21 05:06:03.645299] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.978 [2024-11-21 05:06:03.645309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.978 [2024-11-21 05:06:03.645317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.978 [2024-11-21 05:06:03.645340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.978 
[2024-11-21 05:06:03.645352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.978 [2024-11-21 05:06:03.645358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.978 [2024-11-21 05:06:03.645371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.978 [2024-11-21 05:06:03.645377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.978 [2024-11-21 05:06:03.645385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.978 [2024-11-21 05:06:03.645390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.978 [2024-11-21 05:06:03.645397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.978 [2024-11-21 05:06:03.645404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.978 [2024-11-21 05:06:03.645417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.978 [2024-11-21 05:06:03.645435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.978 [2024-11-21 05:06:03.645452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.978 [2024-11-21 05:06:03.645469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.978 [2024-11-21 05:06:03.645490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.978 [2024-11-21 05:06:03.645511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.978 [2024-11-21 05:06:03.645524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.978 [2024-11-21 05:06:03.645530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.978 [2024-11-21 05:06:03.645537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.978 [2024-11-21 05:06:03.645543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.978 [2024-11-21 05:06:03.645550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:46.978 [2024-11-21 05:06:03.645556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.978 [2024-11-21 05:06:03.645570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.978 [2024-11-21 05:06:03.645576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645582] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.978 [2024-11-21 05:06:03.645595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.978 [2024-11-21 05:06:03.645602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.978 [2024-11-21 05:06:03.645627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.978 [2024-11-21 05:06:03.645634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.978 [2024-11-21 05:06:03.645640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.978 [2024-11-21 05:06:03.645648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.978 [2024-11-21 05:06:03.645654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.978 [2024-11-21 05:06:03.645661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.978 [2024-11-21 05:06:03.645670] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.978 [2024-11-21 05:06:03.645681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.979 [2024-11-21 05:06:03.645697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.979 [2024-11-21 05:06:03.645704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.979 [2024-11-21 05:06:03.645713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.979 [2024-11-21 05:06:03.645719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.979 [2024-11-21 05:06:03.645728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.979 [2024-11-21 05:06:03.645735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.979 [2024-11-21 05:06:03.645749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.979 [2024-11-21 05:06:03.645755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.979 [2024-11-21 05:06:03.645762] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.979 [2024-11-21 05:06:03.645797] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.979 [2024-11-21 05:06:03.645806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.979 [2024-11-21 05:06:03.645821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.979 [2024-11-21 05:06:03.645828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.979 [2024-11-21 05:06:03.645836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.979 [2024-11-21 05:06:03.645843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.979 [2024-11-21 05:06:03.645851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.979 [2024-11-21 05:06:03.645858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:19:46.979 [2024-11-21 05:06:03.645866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.979 [2024-11-21 05:06:03.645889] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
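The scrub notice above is the startup step that zero-fills the NV cache data region before first use. Assuming the type 0xfffffffe region on the nvc device in the dump just printed (blk_sz 0x13c0e0, i.e. 1,294,560 4-KiB blocks, about 5.3 GB across the 5 chunks reported on the next line) is that data area, the ~3.9 s duration logged after the scrub works out to roughly 1.36 GB/s of sequential zeroing, which is why startup stalls here. A back-of-the-envelope check, not harness code:

    # 0x13c0e0 = 1294560 blocks of 4096 B scrubbed in ~3.897 s
    awk 'BEGIN { printf "%.2f GB/s\n", 1294560 * 4096 / 3.897 / 1e9 }'
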
00:19:46.979 [2024-11-21 05:06:03.645897] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:51.185 [2024-11-21 05:06:07.543250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.543335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:51.185 [2024-11-21 05:06:07.543356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3897.341 ms 00:19:51.185 [2024-11-21 05:06:07.543368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.557459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.557527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.185 [2024-11-21 05:06:07.557541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.974 ms 00:19:51.185 [2024-11-21 05:06:07.557554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.557684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.557697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:51.185 [2024-11-21 05:06:07.557710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:51.185 [2024-11-21 05:06:07.557729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.577673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.577731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.185 [2024-11-21 05:06:07.577746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.889 ms 00:19:51.185 [2024-11-21 05:06:07.577758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.577794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.577809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.185 [2024-11-21 05:06:07.577818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.185 [2024-11-21 05:06:07.577828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.578350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.578382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.185 [2024-11-21 05:06:07.578396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:19:51.185 [2024-11-21 05:06:07.578413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.578543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.578558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.185 [2024-11-21 05:06:07.578576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:51.185 [2024-11-21 05:06:07.578593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.586324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.586379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.185 [2024-11-21 
05:06:07.586390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.683 ms 00:19:51.185 [2024-11-21 05:06:07.586407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.596673] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:51.185 [2024-11-21 05:06:07.604855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.604900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:51.185 [2024-11-21 05:06:07.604917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.379 ms 00:19:51.185 [2024-11-21 05:06:07.604926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.704965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.705024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:51.185 [2024-11-21 05:06:07.705049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.002 ms 00:19:51.185 [2024-11-21 05:06:07.705061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.705279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.705291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:51.185 [2024-11-21 05:06:07.705304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:19:51.185 [2024-11-21 05:06:07.705314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.711534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.711587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:51.185 [2024-11-21 05:06:07.711828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.163 ms 00:19:51.185 [2024-11-21 05:06:07.711849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.717867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.718056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:51.185 [2024-11-21 05:06:07.718297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.683 ms 00:19:51.185 [2024-11-21 05:06:07.718328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.718904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.718954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:51.185 [2024-11-21 05:06:07.718983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:19:51.185 [2024-11-21 05:06:07.719629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.770596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.770670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:51.185 [2024-11-21 05:06:07.770695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.873 ms 00:19:51.185 [2024-11-21 05:06:07.770704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 
05:06:07.778480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.778534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:51.185 [2024-11-21 05:06:07.778550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.687 ms 00:19:51.185 [2024-11-21 05:06:07.778560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.784873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.784926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:51.185 [2024-11-21 05:06:07.784939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms 00:19:51.185 [2024-11-21 05:06:07.784947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.791514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.791569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:51.185 [2024-11-21 05:06:07.791586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.512 ms 00:19:51.185 [2024-11-21 05:06:07.791594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.791669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.791684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:51.185 [2024-11-21 05:06:07.791696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:51.185 [2024-11-21 05:06:07.791711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.791848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.185 [2024-11-21 05:06:07.791861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:51.185 [2024-11-21 05:06:07.791873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:51.185 [2024-11-21 05:06:07.791882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.185 [2024-11-21 05:06:07.793011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4155.455 ms, result 0 00:19:51.185 { 00:19:51.185 "name": "ftl0", 00:19:51.185 "uuid": "960aff3e-5f36-4172-8185-150b6fe5d780" 00:19:51.185 } 00:19:51.185 05:06:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:51.185 05:06:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:51.185 05:06:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:51.447 05:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:51.447 [2024-11-21 05:06:08.136544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:51.447 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:51.447 Zero copy mechanism will not be used. 00:19:51.447 Running I/O for 4 seconds... 
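Before the samples arrive, it is worth condensing what the preceding trace actually built: ftl0 is an FTL bdev layered on a thin-provisioned logical volume carved from the base NVMe namespace at 0000:00:11.0, with a 5171 MiB split of the second namespace (0000:00:10.0) serving as the non-volatile write cache and the resident L2P capped at 20 MiB of DRAM. A condensed replay of the traced RPC sequence, with UUIDs replaced by placeholders:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>    # thin 101 GiB base
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1                    # 5171 MiB NV cache
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --l2p_dram_limit 20

The first workload deliberately uses 69632-byte (68 KiB) units at queue depth 1; as the notice above says, that exceeds bdevperf's 65536-byte zero-copy threshold, so buffers are bounced instead of zero-copied.
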
00:19:53.778 661.00 IOPS, 43.89 MiB/s [2024-11-21T05:06:11.457Z] 793.50 IOPS, 52.69 MiB/s [2024-11-21T05:06:12.399Z] 787.67 IOPS, 52.31 MiB/s [2024-11-21T05:06:12.399Z] 790.00 IOPS, 52.46 MiB/s 00:19:55.665 Latency(us) 00:19:55.665 [2024-11-21T05:06:12.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.665 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:55.665 ftl0 : 4.00 789.99 52.46 0.00 0.00 1339.16 187.47 2545.82 00:19:55.665 [2024-11-21T05:06:12.399Z] =================================================================================================================== 00:19:55.665 [2024-11-21T05:06:12.399Z] Total : 789.99 52.46 0.00 0.00 1339.16 187.47 2545.82 00:19:55.665 [2024-11-21 05:06:12.143692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:55.665 { 00:19:55.665 "results": [ 00:19:55.665 { 00:19:55.665 "job": "ftl0", 00:19:55.665 "core_mask": "0x1", 00:19:55.665 "workload": "randwrite", 00:19:55.665 "status": "finished", 00:19:55.665 "queue_depth": 1, 00:19:55.665 "io_size": 69632, 00:19:55.665 "runtime": 4.0013, 00:19:55.665 "iops": 789.9932521930373, 00:19:55.665 "mibps": 52.46048940344388, 00:19:55.665 "io_failed": 0, 00:19:55.665 "io_timeout": 0, 00:19:55.665 "avg_latency_us": 1339.156041174896, 00:19:55.665 "min_latency_us": 187.47076923076924, 00:19:55.665 "max_latency_us": 2545.8215384615382 00:19:55.665 } 00:19:55.665 ], 00:19:55.665 "core_count": 1 00:19:55.665 } 00:19:55.665 05:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:55.665 [2024-11-21 05:06:12.249942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:55.665 Running I/O for 4 seconds... 
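A quick sanity check on the queue-depth-1 results above: by Little's law the average in-flight count equals IOPS times mean latency, and 789.99 IOPS at 1.339 ms gives about 1.06, matching the configured depth of 1. The run that just started raises the depth to 128 with 4 KiB units:

    # Little's law check on the qd=1 run (values from the JSON above).
    awk 'BEGIN { printf "in-flight = %.2f\n", 789.9932521930373 * 1339.156041174896 / 1e6 }'
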
00:19:57.547 7208.00 IOPS, 28.16 MiB/s [2024-11-21T05:06:15.669Z] 6161.50 IOPS, 24.07 MiB/s [2024-11-21T05:06:16.615Z] 5667.33 IOPS, 22.14 MiB/s [2024-11-21T05:06:16.615Z] 5410.75 IOPS, 21.14 MiB/s 00:19:59.881 Latency(us) 00:19:59.881 [2024-11-21T05:06:16.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.881 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:59.881 ftl0 : 4.03 5400.78 21.10 0.00 0.00 23616.90 373.37 47589.22 00:19:59.881 [2024-11-21T05:06:16.615Z] =================================================================================================================== 00:19:59.881 [2024-11-21T05:06:16.615Z] Total : 5400.78 21.10 0.00 0.00 23616.90 0.00 47589.22 00:19:59.881 [2024-11-21 05:06:16.287392] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:59.881 { 00:19:59.881 "results": [ 00:19:59.881 { 00:19:59.881 "job": "ftl0", 00:19:59.881 "core_mask": "0x1", 00:19:59.881 "workload": "randwrite", 00:19:59.881 "status": "finished", 00:19:59.881 "queue_depth": 128, 00:19:59.881 "io_size": 4096, 00:19:59.881 "runtime": 4.031083, 00:19:59.881 "iops": 5400.781874250667, 00:19:59.881 "mibps": 21.096804196291668, 00:19:59.881 "io_failed": 0, 00:19:59.881 "io_timeout": 0, 00:19:59.881 "avg_latency_us": 23616.904854799788, 00:19:59.881 "min_latency_us": 373.36615384615385, 00:19:59.881 "max_latency_us": 47589.21846153846 00:19:59.881 } 00:19:59.881 ], 00:19:59.881 "core_count": 1 00:19:59.881 } 00:19:59.881 05:06:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:59.881 [2024-11-21 05:06:16.408076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:59.881 Running I/O for 4 seconds... 
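The qd=128 numbers above pass the same check, 5400.78 IOPS at 23.62 ms giving about 127.5 in flight, so the device rather than the queue is the bottleneck; the falling running averages (7208 down to 5411 IOPS) are consistent with the FTL's write buffer filling and relocation kicking in. The verify run now starting writes a pattern and reads it back for comparison; note in the results below that its LBA range (start 0x0, length 20971520 blocks) matches the 20971520 L2P entries from the layout dump, i.e. the device's entire logical space:

    # Little's law check on the qd=128 randwrite run.
    awk 'BEGIN { printf "in-flight = %.1f\n", 5400.781874250667 * 23616.904854799788 / 1e6 }'
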
00:20:01.769 4360.00 IOPS, 17.03 MiB/s [2024-11-21T05:06:19.445Z] 4392.50 IOPS, 17.16 MiB/s [2024-11-21T05:06:20.831Z] 4403.33 IOPS, 17.20 MiB/s [2024-11-21T05:06:20.831Z] 4429.25 IOPS, 17.30 MiB/s 00:20:04.097 Latency(us) 00:20:04.097 [2024-11-21T05:06:20.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.097 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:04.097 Verification LBA range: start 0x0 length 0x1400000 00:20:04.097 ftl0 : 4.01 4443.96 17.36 0.00 0.00 28718.53 376.52 38111.70 00:20:04.097 [2024-11-21T05:06:20.831Z] =================================================================================================================== 00:20:04.097 [2024-11-21T05:06:20.831Z] Total : 4443.96 17.36 0.00 0.00 28718.53 0.00 38111.70 00:20:04.097 [2024-11-21 05:06:20.435264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:04.097 { 00:20:04.097 "results": [ 00:20:04.097 { 00:20:04.097 "job": "ftl0", 00:20:04.097 "core_mask": "0x1", 00:20:04.097 "workload": "verify", 00:20:04.097 "status": "finished", 00:20:04.097 "verify_range": { 00:20:04.097 "start": 0, 00:20:04.097 "length": 20971520 00:20:04.097 }, 00:20:04.097 "queue_depth": 128, 00:20:04.097 "io_size": 4096, 00:20:04.097 "runtime": 4.012633, 00:20:04.097 "iops": 4443.964848018745, 00:20:04.097 "mibps": 17.359237687573223, 00:20:04.097 "io_failed": 0, 00:20:04.097 "io_timeout": 0, 00:20:04.097 "avg_latency_us": 28718.531756910656, 00:20:04.097 "min_latency_us": 376.5169230769231, 00:20:04.097 "max_latency_us": 38111.70461538462 00:20:04.097 } 00:20:04.097 ], 00:20:04.097 "core_count": 1 00:20:04.097 } 00:20:04.097 05:06:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:04.097 [2024-11-21 05:06:20.659670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.097 [2024-11-21 05:06:20.659912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:04.097 [2024-11-21 05:06:20.659940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.097 [2024-11-21 05:06:20.659951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.097 [2024-11-21 05:06:20.659992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:04.097 [2024-11-21 05:06:20.660715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.097 [2024-11-21 05:06:20.660752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:04.097 [2024-11-21 05:06:20.660765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:20:04.097 [2024-11-21 05:06:20.660783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.097 [2024-11-21 05:06:20.663685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.097 [2024-11-21 05:06:20.663735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:04.097 [2024-11-21 05:06:20.663746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:20:04.097 [2024-11-21 05:06:20.663761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.888872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.889100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:04.361 [2024-11-21 05:06:20.889152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 225.090 ms 00:20:04.361 [2024-11-21 05:06:20.889165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.895600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.895666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:04.361 [2024-11-21 05:06:20.895681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:20:04.361 [2024-11-21 05:06:20.895697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.898534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.898595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:04.361 [2024-11-21 05:06:20.898626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:20:04.361 [2024-11-21 05:06:20.898637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.905709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.905768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:04.361 [2024-11-21 05:06:20.905781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.025 ms 00:20:04.361 [2024-11-21 05:06:20.905796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.905938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.905954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:04.361 [2024-11-21 05:06:20.905964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:04.361 [2024-11-21 05:06:20.905975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.909488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.909545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:04.361 [2024-11-21 05:06:20.909555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:20:04.361 [2024-11-21 05:06:20.909565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.912696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.912748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:04.361 [2024-11-21 05:06:20.912758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:04.361 [2024-11-21 05:06:20.912768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.914954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.915033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:04.361 [2024-11-21 05:06:20.915043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:20:04.361 [2024-11-21 05:06:20.915056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.917355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.361 [2024-11-21 05:06:20.917413] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:04.361 [2024-11-21 05:06:20.917423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.231 ms 00:20:04.361 [2024-11-21 05:06:20.917433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.361 [2024-11-21 05:06:20.917476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:04.361 [2024-11-21 05:06:20.917502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:04.361 [2024-11-21 05:06:20.917690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:04.362 [2024-11-21 05:06:20.917738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.917997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:04.362 [2024-11-21 05:06:20.918461] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:04.362 [2024-11-21 05:06:20.918469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 960aff3e-5f36-4172-8185-150b6fe5d780 00:20:04.362 [2024-11-21 05:06:20.918478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:04.362 [2024-11-21 05:06:20.918487] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:04.362 [2024-11-21 05:06:20.918500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:04.362 [2024-11-21 05:06:20.918508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:04.362 [2024-11-21 05:06:20.918521] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:04.362 [2024-11-21 05:06:20.918528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:04.363 [2024-11-21 05:06:20.918538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:04.363 [2024-11-21 05:06:20.918544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:04.363 [2024-11-21 05:06:20.918557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:04.363 [2024-11-21 05:06:20.918566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.363 [2024-11-21 05:06:20.918580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:04.363 [2024-11-21 05:06:20.918591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:20:04.363 [2024-11-21 05:06:20.918602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.921143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.363 [2024-11-21 05:06:20.921184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:04.363 [2024-11-21 05:06:20.921195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:20:04.363 [2024-11-21 05:06:20.921206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.921328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.363 [2024-11-21 05:06:20.921343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:04.363 [2024-11-21 05:06:20.921353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:04.363 [2024-11-21 05:06:20.921365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.929151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.929204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.363 [2024-11-21 05:06:20.929215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.929227] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.929290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.929304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.363 [2024-11-21 05:06:20.929312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.929322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.929400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.929414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.363 [2024-11-21 05:06:20.929422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.929433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.929451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.929462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.363 [2024-11-21 05:06:20.929470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.929485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.942827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.942885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.363 [2024-11-21 05:06:20.942897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.942908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.363 [2024-11-21 05:06:20.954180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.954191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.363 [2024-11-21 05:06:20.954295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.954306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.363 [2024-11-21 05:06:20.954372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.954387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.363 [2024-11-21 05:06:20.954484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:04.363 [2024-11-21 05:06:20.954501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.363 [2024-11-21 05:06:20.954561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.954572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.363 [2024-11-21 05:06:20.954663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.954673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.363 [2024-11-21 05:06:20.954738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.363 [2024-11-21 05:06:20.954762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.363 [2024-11-21 05:06:20.954780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.363 [2024-11-21 05:06:20.954937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 295.217 ms, result 0 00:20:04.363 true 00:20:04.363 05:06:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 87396 00:20:04.363 05:06:20 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 87396 ']' 00:20:04.363 05:06:20 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 87396 00:20:04.363 05:06:20 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:04.363 05:06:20 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.363 05:06:20 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87396 00:20:04.363 05:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.363 05:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.363 killing process with pid 87396 00:20:04.363 Received shutdown signal, test time was about 4.000000 seconds 00:20:04.363 00:20:04.363 Latency(us) 00:20:04.363 [2024-11-21T05:06:21.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.363 [2024-11-21T05:06:21.097Z] =================================================================================================================== 00:20:04.363 [2024-11-21T05:06:21.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.363 05:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87396' 00:20:04.363 05:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 87396 00:20:04.363 05:06:21 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 87396 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:08.743 Remove shared memory files 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:08.743 05:06:25 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:08.743 ************************************ 00:20:08.743 END TEST ftl_bdevperf 00:20:08.743 00:20:08.743 real 0m25.650s 00:20:08.743 user 0m28.283s 00:20:08.743 sys 0m0.945s 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.743 05:06:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:08.743 ************************************ 00:20:08.743 05:06:25 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:08.743 05:06:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:08.743 05:06:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.743 05:06:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:08.743 ************************************ 00:20:08.743 START TEST ftl_trim 00:20:08.743 ************************************ 00:20:08.743 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:08.743 * Looking for test storage... 00:20:09.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:09.004 05:06:25 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.004 --rc genhtml_branch_coverage=1 00:20:09.004 --rc genhtml_function_coverage=1 00:20:09.004 --rc genhtml_legend=1 00:20:09.004 --rc geninfo_all_blocks=1 00:20:09.004 --rc geninfo_unexecuted_blocks=1 00:20:09.004 00:20:09.004 ' 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.004 --rc genhtml_branch_coverage=1 00:20:09.004 --rc genhtml_function_coverage=1 00:20:09.004 --rc genhtml_legend=1 00:20:09.004 --rc geninfo_all_blocks=1 00:20:09.004 --rc geninfo_unexecuted_blocks=1 00:20:09.004 00:20:09.004 ' 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.004 --rc genhtml_branch_coverage=1 00:20:09.004 --rc genhtml_function_coverage=1 00:20:09.004 --rc genhtml_legend=1 00:20:09.004 --rc geninfo_all_blocks=1 00:20:09.004 --rc geninfo_unexecuted_blocks=1 00:20:09.004 00:20:09.004 ' 00:20:09.004 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:09.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.004 --rc genhtml_branch_coverage=1 00:20:09.004 --rc genhtml_function_coverage=1 00:20:09.004 --rc genhtml_legend=1 00:20:09.004 --rc geninfo_all_blocks=1 00:20:09.004 --rc geninfo_unexecuted_blocks=1 00:20:09.004 00:20:09.004 ' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:09.004 05:06:25 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.005 05:06:25 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87748 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87748 00:20:09.005 05:06:25 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:09.005 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87748 ']' 00:20:09.005 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.005 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.005 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.005 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.005 05:06:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:09.005 [2024-11-21 05:06:25.703275] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:20:09.005 [2024-11-21 05:06:25.703452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87748 ] 00:20:09.265 [2024-11-21 05:06:25.869046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:09.265 [2024-11-21 05:06:25.900223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.265 [2024-11-21 05:06:25.900674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.265 [2024-11-21 05:06:25.900716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.838 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.838 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:09.838 05:06:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:09.838 05:06:26 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:09.838 05:06:26 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:09.838 05:06:26 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:09.838 05:06:26 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:09.838 05:06:26 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:10.410 05:06:26 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:10.410 05:06:26 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:10.410 05:06:26 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:10.410 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:10.410 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:10.410 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:10.410 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:10.410 05:06:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:10.410 { 00:20:10.410 "name": "nvme0n1", 00:20:10.410 "aliases": [ 
00:20:10.410 "66386c2c-7b87-45d6-bc35-406e9a6a2762" 00:20:10.410 ], 00:20:10.410 "product_name": "NVMe disk", 00:20:10.410 "block_size": 4096, 00:20:10.410 "num_blocks": 1310720, 00:20:10.410 "uuid": "66386c2c-7b87-45d6-bc35-406e9a6a2762", 00:20:10.410 "numa_id": -1, 00:20:10.410 "assigned_rate_limits": { 00:20:10.410 "rw_ios_per_sec": 0, 00:20:10.410 "rw_mbytes_per_sec": 0, 00:20:10.410 "r_mbytes_per_sec": 0, 00:20:10.410 "w_mbytes_per_sec": 0 00:20:10.410 }, 00:20:10.410 "claimed": true, 00:20:10.410 "claim_type": "read_many_write_one", 00:20:10.410 "zoned": false, 00:20:10.410 "supported_io_types": { 00:20:10.410 "read": true, 00:20:10.410 "write": true, 00:20:10.410 "unmap": true, 00:20:10.410 "flush": true, 00:20:10.410 "reset": true, 00:20:10.410 "nvme_admin": true, 00:20:10.410 "nvme_io": true, 00:20:10.410 "nvme_io_md": false, 00:20:10.410 "write_zeroes": true, 00:20:10.410 "zcopy": false, 00:20:10.410 "get_zone_info": false, 00:20:10.410 "zone_management": false, 00:20:10.410 "zone_append": false, 00:20:10.410 "compare": true, 00:20:10.410 "compare_and_write": false, 00:20:10.410 "abort": true, 00:20:10.410 "seek_hole": false, 00:20:10.410 "seek_data": false, 00:20:10.410 "copy": true, 00:20:10.410 "nvme_iov_md": false 00:20:10.410 }, 00:20:10.410 "driver_specific": { 00:20:10.410 "nvme": [ 00:20:10.410 { 00:20:10.410 "pci_address": "0000:00:11.0", 00:20:10.410 "trid": { 00:20:10.410 "trtype": "PCIe", 00:20:10.410 "traddr": "0000:00:11.0" 00:20:10.410 }, 00:20:10.410 "ctrlr_data": { 00:20:10.410 "cntlid": 0, 00:20:10.410 "vendor_id": "0x1b36", 00:20:10.410 "model_number": "QEMU NVMe Ctrl", 00:20:10.410 "serial_number": "12341", 00:20:10.410 "firmware_revision": "8.0.0", 00:20:10.410 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:10.410 "oacs": { 00:20:10.410 "security": 0, 00:20:10.410 "format": 1, 00:20:10.410 "firmware": 0, 00:20:10.410 "ns_manage": 1 00:20:10.410 }, 00:20:10.410 "multi_ctrlr": false, 00:20:10.410 "ana_reporting": false 00:20:10.410 }, 00:20:10.410 "vs": { 00:20:10.410 "nvme_version": "1.4" 00:20:10.410 }, 00:20:10.410 "ns_data": { 00:20:10.410 "id": 1, 00:20:10.410 "can_share": false 00:20:10.410 } 00:20:10.410 } 00:20:10.410 ], 00:20:10.410 "mp_policy": "active_passive" 00:20:10.410 } 00:20:10.410 } 00:20:10.410 ]' 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:10.410 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:20:10.410 05:06:27 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:10.410 05:06:27 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:10.410 05:06:27 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:10.410 05:06:27 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:10.410 05:06:27 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:10.671 05:06:27 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3b50b9a7-4a38-4ee4-9f25-f59840afb90b 00:20:10.671 05:06:27 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:10.671 05:06:27 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3b50b9a7-4a38-4ee4-9f25-f59840afb90b 00:20:10.935 05:06:27 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:11.195 05:06:27 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=c543dc97-6bb7-48cd-bf88-5b8e92137259 00:20:11.195 05:06:27 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c543dc97-6bb7-48cd-bf88-5b8e92137259 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:11.454 05:06:27 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.454 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.454 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:11.454 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:11.454 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:11.454 05:06:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.713 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:11.713 { 00:20:11.713 "name": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:11.713 "aliases": [ 00:20:11.713 "lvs/nvme0n1p0" 00:20:11.713 ], 00:20:11.713 "product_name": "Logical Volume", 00:20:11.713 "block_size": 4096, 00:20:11.713 "num_blocks": 26476544, 00:20:11.713 "uuid": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:11.713 "assigned_rate_limits": { 00:20:11.713 "rw_ios_per_sec": 0, 00:20:11.713 "rw_mbytes_per_sec": 0, 00:20:11.713 "r_mbytes_per_sec": 0, 00:20:11.713 "w_mbytes_per_sec": 0 00:20:11.713 }, 00:20:11.713 "claimed": false, 00:20:11.713 "zoned": false, 00:20:11.713 "supported_io_types": { 00:20:11.713 "read": true, 00:20:11.713 "write": true, 00:20:11.713 "unmap": true, 00:20:11.713 "flush": false, 00:20:11.713 "reset": true, 00:20:11.713 "nvme_admin": false, 00:20:11.713 "nvme_io": false, 00:20:11.713 "nvme_io_md": false, 00:20:11.713 "write_zeroes": true, 00:20:11.713 "zcopy": false, 00:20:11.713 "get_zone_info": false, 00:20:11.713 "zone_management": false, 00:20:11.713 "zone_append": false, 00:20:11.713 "compare": false, 00:20:11.713 "compare_and_write": false, 00:20:11.713 "abort": false, 00:20:11.713 "seek_hole": true, 00:20:11.713 "seek_data": true, 00:20:11.713 "copy": false, 00:20:11.713 "nvme_iov_md": false 00:20:11.713 }, 00:20:11.713 "driver_specific": { 00:20:11.713 "lvol": { 00:20:11.713 "lvol_store_uuid": "c543dc97-6bb7-48cd-bf88-5b8e92137259", 00:20:11.713 "base_bdev": "nvme0n1", 00:20:11.713 "thin_provision": true, 00:20:11.713 "num_allocated_clusters": 0, 00:20:11.713 "snapshot": false, 00:20:11.714 "clone": false, 00:20:11.714 "esnap_clone": false 00:20:11.714 } 00:20:11.714 } 00:20:11.714 } 00:20:11.714 ]' 00:20:11.714 05:06:28 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:11.714 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:11.714 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:11.714 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:11.714 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:11.714 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:11.714 05:06:28 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:11.714 05:06:28 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:11.714 05:06:28 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:11.973 05:06:28 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:11.973 05:06:28 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:11.973 05:06:28 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.973 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:11.973 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:11.973 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:11.973 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:11.973 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:12.231 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:12.231 { 00:20:12.231 "name": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:12.231 "aliases": [ 00:20:12.231 "lvs/nvme0n1p0" 00:20:12.231 ], 00:20:12.231 "product_name": "Logical Volume", 00:20:12.231 "block_size": 4096, 00:20:12.231 "num_blocks": 26476544, 00:20:12.231 "uuid": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:12.231 "assigned_rate_limits": { 00:20:12.231 "rw_ios_per_sec": 0, 00:20:12.231 "rw_mbytes_per_sec": 0, 00:20:12.231 "r_mbytes_per_sec": 0, 00:20:12.231 "w_mbytes_per_sec": 0 00:20:12.231 }, 00:20:12.231 "claimed": false, 00:20:12.231 "zoned": false, 00:20:12.231 "supported_io_types": { 00:20:12.231 "read": true, 00:20:12.231 "write": true, 00:20:12.231 "unmap": true, 00:20:12.231 "flush": false, 00:20:12.231 "reset": true, 00:20:12.231 "nvme_admin": false, 00:20:12.231 "nvme_io": false, 00:20:12.231 "nvme_io_md": false, 00:20:12.231 "write_zeroes": true, 00:20:12.231 "zcopy": false, 00:20:12.231 "get_zone_info": false, 00:20:12.231 "zone_management": false, 00:20:12.231 "zone_append": false, 00:20:12.231 "compare": false, 00:20:12.231 "compare_and_write": false, 00:20:12.231 "abort": false, 00:20:12.231 "seek_hole": true, 00:20:12.231 "seek_data": true, 00:20:12.231 "copy": false, 00:20:12.231 "nvme_iov_md": false 00:20:12.231 }, 00:20:12.231 "driver_specific": { 00:20:12.231 "lvol": { 00:20:12.231 "lvol_store_uuid": "c543dc97-6bb7-48cd-bf88-5b8e92137259", 00:20:12.231 "base_bdev": "nvme0n1", 00:20:12.231 "thin_provision": true, 00:20:12.231 "num_allocated_clusters": 0, 00:20:12.231 "snapshot": false, 00:20:12.231 "clone": false, 00:20:12.231 "esnap_clone": false 00:20:12.231 } 00:20:12.231 } 00:20:12.231 } 00:20:12.231 ]' 00:20:12.231 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:12.231 05:06:28 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:12.231 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:12.231 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:12.231 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:12.231 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:12.231 05:06:28 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:12.231 05:06:28 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:12.489 05:06:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:12.489 05:06:28 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:12.489 05:06:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:12.489 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:12.489 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:12.489 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:12.489 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:12.489 05:06:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa 00:20:12.489 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:12.489 { 00:20:12.489 "name": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:12.489 "aliases": [ 00:20:12.489 "lvs/nvme0n1p0" 00:20:12.489 ], 00:20:12.489 "product_name": "Logical Volume", 00:20:12.489 "block_size": 4096, 00:20:12.489 "num_blocks": 26476544, 00:20:12.489 "uuid": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:12.489 "assigned_rate_limits": { 00:20:12.489 "rw_ios_per_sec": 0, 00:20:12.489 "rw_mbytes_per_sec": 0, 00:20:12.489 "r_mbytes_per_sec": 0, 00:20:12.489 "w_mbytes_per_sec": 0 00:20:12.489 }, 00:20:12.489 "claimed": false, 00:20:12.489 "zoned": false, 00:20:12.489 "supported_io_types": { 00:20:12.489 "read": true, 00:20:12.489 "write": true, 00:20:12.489 "unmap": true, 00:20:12.489 "flush": false, 00:20:12.489 "reset": true, 00:20:12.489 "nvme_admin": false, 00:20:12.489 "nvme_io": false, 00:20:12.489 "nvme_io_md": false, 00:20:12.489 "write_zeroes": true, 00:20:12.489 "zcopy": false, 00:20:12.489 "get_zone_info": false, 00:20:12.489 "zone_management": false, 00:20:12.489 "zone_append": false, 00:20:12.489 "compare": false, 00:20:12.489 "compare_and_write": false, 00:20:12.489 "abort": false, 00:20:12.489 "seek_hole": true, 00:20:12.489 "seek_data": true, 00:20:12.489 "copy": false, 00:20:12.489 "nvme_iov_md": false 00:20:12.489 }, 00:20:12.489 "driver_specific": { 00:20:12.489 "lvol": { 00:20:12.489 "lvol_store_uuid": "c543dc97-6bb7-48cd-bf88-5b8e92137259", 00:20:12.489 "base_bdev": "nvme0n1", 00:20:12.489 "thin_provision": true, 00:20:12.489 "num_allocated_clusters": 0, 00:20:12.489 "snapshot": false, 00:20:12.489 "clone": false, 00:20:12.489 "esnap_clone": false 00:20:12.489 } 00:20:12.489 } 00:20:12.489 } 00:20:12.489 ]' 00:20:12.489 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:12.489 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:12.748 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:12.748 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:12.748 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:12.748 05:06:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:12.748 05:06:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:12.748 05:06:29 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:12.748 [2024-11-21 05:06:29.439116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.439165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.748 [2024-11-21 05:06:29.439176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.748 [2024-11-21 05:06:29.439185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.441099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.441139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.748 [2024-11-21 05:06:29.441147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.878 ms 00:20:12.748 [2024-11-21 05:06:29.441155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.441227] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.748 [2024-11-21 05:06:29.441396] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.748 [2024-11-21 05:06:29.441408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.441417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.748 [2024-11-21 05:06:29.441425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:20:12.748 [2024-11-21 05:06:29.441432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.441512] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:20:12.748 [2024-11-21 05:06:29.442480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.442503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:12.748 [2024-11-21 05:06:29.442512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:12.748 [2024-11-21 05:06:29.442519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.447246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.447268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.748 [2024-11-21 05:06:29.447286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:20:12.748 [2024-11-21 05:06:29.447303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.447395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.447403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.748 [2024-11-21 05:06:29.447414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.045 ms 00:20:12.748 [2024-11-21 05:06:29.447420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.447452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.447459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.748 [2024-11-21 05:06:29.447476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:12.748 [2024-11-21 05:06:29.447482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.447523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:12.748 [2024-11-21 05:06:29.448746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.448768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.748 [2024-11-21 05:06:29.448786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:20:12.748 [2024-11-21 05:06:29.448802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.448851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.748 [2024-11-21 05:06:29.448860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.748 [2024-11-21 05:06:29.448866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:12.748 [2024-11-21 05:06:29.448883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.748 [2024-11-21 05:06:29.448911] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:12.748 [2024-11-21 05:06:29.449023] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.749 [2024-11-21 05:06:29.449037] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.749 [2024-11-21 05:06:29.449047] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.749 [2024-11-21 05:06:29.449055] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449065] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449071] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:12.749 [2024-11-21 05:06:29.449079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.749 [2024-11-21 05:06:29.449085] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.749 [2024-11-21 05:06:29.449094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.749 [2024-11-21 05:06:29.449108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.749 [2024-11-21 05:06:29.449115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.749 [2024-11-21 05:06:29.449121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:12.749 [2024-11-21 05:06:29.449143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.749 [2024-11-21 05:06:29.449233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.749 
[2024-11-21 05:06:29.449251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.749 [2024-11-21 05:06:29.449265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:12.749 [2024-11-21 05:06:29.449272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.749 [2024-11-21 05:06:29.449383] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.749 [2024-11-21 05:06:29.449396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.749 [2024-11-21 05:06:29.449403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.749 [2024-11-21 05:06:29.449427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.749 [2024-11-21 05:06:29.449447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.749 [2024-11-21 05:06:29.449461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.749 [2024-11-21 05:06:29.449468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:12.749 [2024-11-21 05:06:29.449474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.749 [2024-11-21 05:06:29.449483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.749 [2024-11-21 05:06:29.449490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:12.749 [2024-11-21 05:06:29.449497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.749 [2024-11-21 05:06:29.449510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.749 [2024-11-21 05:06:29.449532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.749 [2024-11-21 05:06:29.449552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.749 [2024-11-21 05:06:29.449571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:12.749 [2024-11-21 05:06:29.449593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.749 [2024-11-21 05:06:29.449627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.749 [2024-11-21 05:06:29.449640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.749 [2024-11-21 05:06:29.449647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:12.749 [2024-11-21 05:06:29.449652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.749 [2024-11-21 05:06:29.449660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.749 [2024-11-21 05:06:29.449666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:12.749 [2024-11-21 05:06:29.449673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.749 [2024-11-21 05:06:29.449687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:12.749 [2024-11-21 05:06:29.449694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449700] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.749 [2024-11-21 05:06:29.449707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.749 [2024-11-21 05:06:29.449726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.749 [2024-11-21 05:06:29.449748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:12.749 [2024-11-21 05:06:29.449755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.749 [2024-11-21 05:06:29.449762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.749 [2024-11-21 05:06:29.449768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.749 [2024-11-21 05:06:29.449775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.749 [2024-11-21 05:06:29.449780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.749 [2024-11-21 05:06:29.449789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.749 [2024-11-21 05:06:29.449805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:12.749 [2024-11-21 05:06:29.449818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:12.749 [2024-11-21 05:06:29.449825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:12.749 [2024-11-21 05:06:29.449830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:12.749 [2024-11-21 05:06:29.449837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:12.749 [2024-11-21 05:06:29.449842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:12.749 [2024-11-21 05:06:29.449850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:12.749 [2024-11-21 05:06:29.449855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:12.749 [2024-11-21 05:06:29.449861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:12.749 [2024-11-21 05:06:29.449866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:12.749 [2024-11-21 05:06:29.449897] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.749 [2024-11-21 05:06:29.449904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.749 [2024-11-21 05:06:29.449925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.749 [2024-11-21 05:06:29.449932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.749 [2024-11-21 05:06:29.449937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.749 [2024-11-21 05:06:29.449945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.749 [2024-11-21 05:06:29.449951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.749 [2024-11-21 05:06:29.449959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:20:12.749 [2024-11-21 05:06:29.449965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.749 [2024-11-21 05:06:29.450035] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:12.749 [2024-11-21 05:06:29.450051] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:15.278 [2024-11-21 05:06:31.756683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.278 [2024-11-21 05:06:31.756738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:15.278 [2024-11-21 05:06:31.756756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2306.634 ms 00:20:15.278 [2024-11-21 05:06:31.756765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.278 [2024-11-21 05:06:31.764766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.278 [2024-11-21 05:06:31.764802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.278 [2024-11-21 05:06:31.764815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.897 ms 00:20:15.278 [2024-11-21 05:06:31.764836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.278 [2024-11-21 05:06:31.764961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.764971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.279 [2024-11-21 05:06:31.764984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:15.279 [2024-11-21 05:06:31.764991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.782103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.782138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.279 [2024-11-21 05:06:31.782152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.071 ms 00:20:15.279 [2024-11-21 05:06:31.782160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.782237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.782251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.279 [2024-11-21 05:06:31.782262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.279 [2024-11-21 05:06:31.782270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.782573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.782588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.279 [2024-11-21 05:06:31.782599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:15.279 [2024-11-21 05:06:31.782621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.782758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.782769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.279 [2024-11-21 05:06:31.782782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:15.279 [2024-11-21 05:06:31.782790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.788142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.788169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:15.279 [2024-11-21 05:06:31.788181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.309 ms 00:20:15.279 [2024-11-21 05:06:31.788199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.796410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:15.279 [2024-11-21 05:06:31.810285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.810313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.279 [2024-11-21 05:06:31.810324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.995 ms 00:20:15.279 [2024-11-21 05:06:31.810333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.868887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.868920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:15.279 [2024-11-21 05:06:31.868931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.465 ms 00:20:15.279 [2024-11-21 05:06:31.868945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.869142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.869155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.279 [2024-11-21 05:06:31.869164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:20:15.279 [2024-11-21 05:06:31.869174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.871996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.872026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:15.279 [2024-11-21 05:06:31.872035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.790 ms 00:20:15.279 [2024-11-21 05:06:31.872045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.874466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.874495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:15.279 [2024-11-21 05:06:31.874505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:20:15.279 [2024-11-21 05:06:31.874515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.874831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.874849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.279 [2024-11-21 05:06:31.874858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:20:15.279 [2024-11-21 05:06:31.874869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.903807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.903840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:15.279 [2024-11-21 05:06:31.903853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.904 ms 00:20:15.279 [2024-11-21 05:06:31.903863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
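
Up to this point the trace covers the stack the trim test builds: a thin-provisioned lvol (26476544 blocks x 4096 B = 103424 MiB) as the FTL base device, a 5171 MiB split of the PCIe-attached nvc0n1 as the NV write-buffer cache, and ftl0 on top. The layout dump above is internally consistent: 23592960 L2P entries at 4 B each account for the 90.00 MiB l2p region, and --l2p_dram_limit 60 is why ftl_l2p_cache caps the resident map at 59 (of 60) MiB. A minimal sketch of the same topology using the RPCs the harness invokes (names, sizes, and the PCIe address are the ones this log reports; rpc.py is scripts/rpc.py in the SPDK tree):

  # attach the cache controller and carve one 5171 MiB write-buffer partition
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1   # -> nvc0n1p0
  # FTL on the lvol, NV cache on the split; --overprovisioning 10 leaves
  # 23592960 user blocks, i.e. 90% of the 102400 MiB base data region
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa \
      -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
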
00:20:15.279 [2024-11-21 05:06:31.907634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.907665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:15.279 [2024-11-21 05:06:31.907687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:20:15.279 [2024-11-21 05:06:31.907699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.910673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.910703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:15.279 [2024-11-21 05:06:31.910713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:20:15.279 [2024-11-21 05:06:31.910721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.913948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.913977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.279 [2024-11-21 05:06:31.913986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:20:15.279 [2024-11-21 05:06:31.913996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.914059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.914071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.279 [2024-11-21 05:06:31.914080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:15.279 [2024-11-21 05:06:31.914089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.914170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.279 [2024-11-21 05:06:31.914187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.279 [2024-11-21 05:06:31.914195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:15.279 [2024-11-21 05:06:31.914204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.279 [2024-11-21 05:06:31.915047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.279 [2024-11-21 05:06:31.915991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2475.664 ms, result 0 00:20:15.279 [2024-11-21 05:06:31.916732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.279 { 00:20:15.279 "name": "ftl0", 00:20:15.279 "uuid": "8ac88a35-15dc-41bb-b029-a6466d3507cf" 00:20:15.279 } 00:20:15.279 05:06:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:15.279 05:06:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:15.279 05:06:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.279 05:06:31 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:15.279 05:06:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.279 05:06:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.279 05:06:31 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:15.537 05:06:32 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:15.794 [ 00:20:15.794 { 00:20:15.794 "name": "ftl0", 00:20:15.794 "aliases": [ 00:20:15.794 "8ac88a35-15dc-41bb-b029-a6466d3507cf" 00:20:15.794 ], 00:20:15.794 "product_name": "FTL disk", 00:20:15.794 "block_size": 4096, 00:20:15.794 "num_blocks": 23592960, 00:20:15.794 "uuid": "8ac88a35-15dc-41bb-b029-a6466d3507cf", 00:20:15.794 "assigned_rate_limits": { 00:20:15.794 "rw_ios_per_sec": 0, 00:20:15.794 "rw_mbytes_per_sec": 0, 00:20:15.794 "r_mbytes_per_sec": 0, 00:20:15.794 "w_mbytes_per_sec": 0 00:20:15.794 }, 00:20:15.794 "claimed": false, 00:20:15.794 "zoned": false, 00:20:15.794 "supported_io_types": { 00:20:15.794 "read": true, 00:20:15.795 "write": true, 00:20:15.795 "unmap": true, 00:20:15.795 "flush": true, 00:20:15.795 "reset": false, 00:20:15.795 "nvme_admin": false, 00:20:15.795 "nvme_io": false, 00:20:15.795 "nvme_io_md": false, 00:20:15.795 "write_zeroes": true, 00:20:15.795 "zcopy": false, 00:20:15.795 "get_zone_info": false, 00:20:15.795 "zone_management": false, 00:20:15.795 "zone_append": false, 00:20:15.795 "compare": false, 00:20:15.795 "compare_and_write": false, 00:20:15.795 "abort": false, 00:20:15.795 "seek_hole": false, 00:20:15.795 "seek_data": false, 00:20:15.795 "copy": false, 00:20:15.795 "nvme_iov_md": false 00:20:15.795 }, 00:20:15.795 "driver_specific": { 00:20:15.795 "ftl": { 00:20:15.795 "base_bdev": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 00:20:15.795 "cache": "nvc0n1p0" 00:20:15.795 } 00:20:15.795 } 00:20:15.795 } 00:20:15.795 ] 00:20:15.795 05:06:32 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:15.795 05:06:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:15.795 05:06:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:16.052 05:06:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:16.052 05:06:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:16.052 05:06:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:16.052 { 00:20:16.052 "name": "ftl0", 00:20:16.052 "aliases": [ 00:20:16.052 "8ac88a35-15dc-41bb-b029-a6466d3507cf" 00:20:16.052 ], 00:20:16.052 "product_name": "FTL disk", 00:20:16.052 "block_size": 4096, 00:20:16.052 "num_blocks": 23592960, 00:20:16.052 "uuid": "8ac88a35-15dc-41bb-b029-a6466d3507cf", 00:20:16.052 "assigned_rate_limits": { 00:20:16.052 "rw_ios_per_sec": 0, 00:20:16.052 "rw_mbytes_per_sec": 0, 00:20:16.052 "r_mbytes_per_sec": 0, 00:20:16.052 "w_mbytes_per_sec": 0 00:20:16.052 }, 00:20:16.052 "claimed": false, 00:20:16.052 "zoned": false, 00:20:16.052 "supported_io_types": { 00:20:16.052 "read": true, 00:20:16.052 "write": true, 00:20:16.052 "unmap": true, 00:20:16.052 "flush": true, 00:20:16.052 "reset": false, 00:20:16.052 "nvme_admin": false, 00:20:16.052 "nvme_io": false, 00:20:16.052 "nvme_io_md": false, 00:20:16.052 "write_zeroes": true, 00:20:16.052 "zcopy": false, 00:20:16.052 "get_zone_info": false, 00:20:16.052 "zone_management": false, 00:20:16.052 "zone_append": false, 00:20:16.052 "compare": false, 00:20:16.052 "compare_and_write": false, 00:20:16.052 "abort": false, 00:20:16.052 "seek_hole": false, 00:20:16.052 "seek_data": false, 00:20:16.052 "copy": false, 00:20:16.052 "nvme_iov_md": false 00:20:16.052 }, 00:20:16.052 "driver_specific": { 00:20:16.052 "ftl": { 00:20:16.052 "base_bdev": "db2b6326-2a98-4cb5-b6a1-9b4ce0d520fa", 
00:20:16.052 "cache": "nvc0n1p0" 00:20:16.052 } 00:20:16.052 } 00:20:16.052 } 00:20:16.052 ]' 00:20:16.052 05:06:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:16.052 05:06:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:16.052 05:06:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:16.311 [2024-11-21 05:06:32.953560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.953585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:16.311 [2024-11-21 05:06:32.953595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.311 [2024-11-21 05:06:32.953602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.311 [2024-11-21 05:06:32.953649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:16.311 [2024-11-21 05:06:32.954021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.954037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.311 [2024-11-21 05:06:32.954044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:20:16.311 [2024-11-21 05:06:32.954052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.311 [2024-11-21 05:06:32.954601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.954622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.311 [2024-11-21 05:06:32.954630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:20:16.311 [2024-11-21 05:06:32.954638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.311 [2024-11-21 05:06:32.957345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.957363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.311 [2024-11-21 05:06:32.957371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:20:16.311 [2024-11-21 05:06:32.957381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.311 [2024-11-21 05:06:32.962758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.962783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:16.311 [2024-11-21 05:06:32.962790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.335 ms 00:20:16.311 [2024-11-21 05:06:32.962800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.311 [2024-11-21 05:06:32.964280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.964307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.311 [2024-11-21 05:06:32.964314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:20:16.311 [2024-11-21 05:06:32.964321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.311 [2024-11-21 05:06:32.968630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.311 [2024-11-21 05:06:32.968657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.311 [2024-11-21 05:06:32.968665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.264 ms 00:20:16.312 [2024-11-21 05:06:32.968674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.312 [2024-11-21 05:06:32.968840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.312 [2024-11-21 05:06:32.968860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.312 [2024-11-21 05:06:32.968867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:16.312 [2024-11-21 05:06:32.968874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.312 [2024-11-21 05:06:32.970525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.312 [2024-11-21 05:06:32.970551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:16.312 [2024-11-21 05:06:32.970557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:20:16.312 [2024-11-21 05:06:32.970568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.312 [2024-11-21 05:06:32.971887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.312 [2024-11-21 05:06:32.971912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:16.312 [2024-11-21 05:06:32.971919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:20:16.312 [2024-11-21 05:06:32.971926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.312 [2024-11-21 05:06:32.972796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.312 [2024-11-21 05:06:32.972821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.312 [2024-11-21 05:06:32.972828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:20:16.312 [2024-11-21 05:06:32.972835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.312 [2024-11-21 05:06:32.973832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.312 [2024-11-21 05:06:32.973858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.312 [2024-11-21 05:06:32.973865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:20:16.312 [2024-11-21 05:06:32.973871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.312 [2024-11-21 05:06:32.973905] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.312 [2024-11-21 05:06:32.973918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973968] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.973994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 
05:06:32.974149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:20:16.312 [2024-11-21 05:06:32.974312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.312 [2024-11-21 05:06:32.974406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.313 [2024-11-21 05:06:32.974603] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.313 [2024-11-21 05:06:32.974620] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:20:16.313 [2024-11-21 05:06:32.974627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.313 [2024-11-21 05:06:32.974633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.313 [2024-11-21 05:06:32.974643] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.313 [2024-11-21 05:06:32.974649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.313 [2024-11-21 05:06:32.974655] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.313 [2024-11-21 05:06:32.974661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.313 
[2024-11-21 05:06:32.974668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.313 [2024-11-21 05:06:32.974673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.313 [2024-11-21 05:06:32.974679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.313 [2024-11-21 05:06:32.974685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.313 [2024-11-21 05:06:32.974693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.313 [2024-11-21 05:06:32.974699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:20:16.313 [2024-11-21 05:06:32.974708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.975926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.313 [2024-11-21 05:06:32.975942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.313 [2024-11-21 05:06:32.975949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.192 ms 00:20:16.313 [2024-11-21 05:06:32.975956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.976051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.313 [2024-11-21 05:06:32.976060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.313 [2024-11-21 05:06:32.976067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:16.313 [2024-11-21 05:06:32.976074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.980529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.980558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.313 [2024-11-21 05:06:32.980566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.980574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.980656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.980666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.313 [2024-11-21 05:06:32.980672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.980681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.980740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.980748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.313 [2024-11-21 05:06:32.980755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.980762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.980796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.980803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.313 [2024-11-21 05:06:32.980809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.980816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.988874] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.988903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.313 [2024-11-21 05:06:32.988911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.988918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.995823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.995853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.313 [2024-11-21 05:06:32.995861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.995870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.995933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.995944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.313 [2024-11-21 05:06:32.995952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.995967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.996018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.996037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.313 [2024-11-21 05:06:32.996044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.996051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.996122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.996133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.313 [2024-11-21 05:06:32.996141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.996148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.996201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.996211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.313 [2024-11-21 05:06:32.996217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.996225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.996278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.996287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.313 [2024-11-21 05:06:32.996293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.996302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 [2024-11-21 05:06:32.996352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.313 [2024-11-21 05:06:32.996362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.313 [2024-11-21 05:06:32.996377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.313 [2024-11-21 05:06:32.996385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.313 
[2024-11-21 05:06:32.996542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.971 ms, result 0 00:20:16.313 true 00:20:16.313 05:06:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87748 00:20:16.313 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87748 ']' 00:20:16.313 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87748 00:20:16.313 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:16.313 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.314 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87748 00:20:16.314 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:16.314 killing process with pid 87748 00:20:16.314 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:16.314 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87748' 00:20:16.314 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87748 00:20:16.314 05:06:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87748 00:20:21.575 05:06:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:22.146 65536+0 records in 00:20:22.146 65536+0 records out 00:20:22.146 268435456 bytes (268 MB, 256 MiB) copied, 0.806316 s, 333 MB/s 00:20:22.146 05:06:38 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:22.146 [2024-11-21 05:06:38.835630] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:20:22.146 [2024-11-21 05:06:38.835758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87903 ] 00:20:22.406 [2024-11-21 05:06:38.989386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.406 [2024-11-21 05:06:39.008072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.406 [2024-11-21 05:06:39.090075] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.406 [2024-11-21 05:06:39.090121] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.667 [2024-11-21 05:06:39.236945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.236978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.667 [2024-11-21 05:06:39.236989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:22.667 [2024-11-21 05:06:39.236996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.238735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.238760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.667 [2024-11-21 05:06:39.238770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:20:22.667 [2024-11-21 05:06:39.238776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.238831] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.667 [2024-11-21 05:06:39.239003] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.667 [2024-11-21 05:06:39.239017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.239024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.667 [2024-11-21 05:06:39.239034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:22.667 [2024-11-21 05:06:39.239039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.240011] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.667 [2024-11-21 05:06:39.242110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.242140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.667 [2024-11-21 05:06:39.242148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:20:22.667 [2024-11-21 05:06:39.242156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.242202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.242210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.667 [2024-11-21 05:06:39.242216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:22.667 [2024-11-21 05:06:39.242222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.246631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:22.667 [2024-11-21 05:06:39.246655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.667 [2024-11-21 05:06:39.246664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.378 ms 00:20:22.667 [2024-11-21 05:06:39.246669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.246757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.246765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.667 [2024-11-21 05:06:39.246772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:22.667 [2024-11-21 05:06:39.246778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.246801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.246810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.667 [2024-11-21 05:06:39.246818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:22.667 [2024-11-21 05:06:39.246824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.246839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:22.667 [2024-11-21 05:06:39.247983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.248004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.667 [2024-11-21 05:06:39.248011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:20:22.667 [2024-11-21 05:06:39.248020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.248049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.248058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.667 [2024-11-21 05:06:39.248069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:22.667 [2024-11-21 05:06:39.248075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.248090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.667 [2024-11-21 05:06:39.248104] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:22.667 [2024-11-21 05:06:39.248133] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.667 [2024-11-21 05:06:39.248146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:22.667 [2024-11-21 05:06:39.248224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:22.667 [2024-11-21 05:06:39.248232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.667 [2024-11-21 05:06:39.248239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:22.667 [2024-11-21 05:06:39.248247] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.667 [2024-11-21 05:06:39.248253] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.667 [2024-11-21 05:06:39.248262] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:22.667 [2024-11-21 05:06:39.248268] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.667 [2024-11-21 05:06:39.248274] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:22.667 [2024-11-21 05:06:39.248279] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:22.667 [2024-11-21 05:06:39.248287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.248293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.667 [2024-11-21 05:06:39.248299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:22.667 [2024-11-21 05:06:39.248305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.248370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.667 [2024-11-21 05:06:39.248377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.667 [2024-11-21 05:06:39.248385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:22.667 [2024-11-21 05:06:39.248391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.667 [2024-11-21 05:06:39.248463] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.667 [2024-11-21 05:06:39.248470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.667 [2024-11-21 05:06:39.248482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.667 [2024-11-21 05:06:39.248491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.667 [2024-11-21 05:06:39.248497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.667 [2024-11-21 05:06:39.248502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.667 [2024-11-21 05:06:39.248507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:22.667 [2024-11-21 05:06:39.248513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.667 [2024-11-21 05:06:39.248520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:22.667 [2024-11-21 05:06:39.248525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.667 [2024-11-21 05:06:39.248530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.667 [2024-11-21 05:06:39.248535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:22.667 [2024-11-21 05:06:39.248539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.667 [2024-11-21 05:06:39.248546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.668 [2024-11-21 05:06:39.248552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:22.668 [2024-11-21 05:06:39.248556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.668 [2024-11-21 05:06:39.248566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248571] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.668 [2024-11-21 05:06:39.248581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.668 [2024-11-21 05:06:39.248596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.668 [2024-11-21 05:06:39.248625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.668 [2024-11-21 05:06:39.248641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:22.668 [2024-11-21 05:06:39.248659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.668 [2024-11-21 05:06:39.248671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.668 [2024-11-21 05:06:39.248677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:22.668 [2024-11-21 05:06:39.248683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.668 [2024-11-21 05:06:39.248688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:22.668 [2024-11-21 05:06:39.248694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:22.668 [2024-11-21 05:06:39.248700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:22.668 [2024-11-21 05:06:39.248713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:22.668 [2024-11-21 05:06:39.248720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.668 [2024-11-21 05:06:39.248732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.668 [2024-11-21 05:06:39.248742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.668 [2024-11-21 05:06:39.248755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:22.668 [2024-11-21 05:06:39.248761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.668 [2024-11-21 05:06:39.248767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.668 
[2024-11-21 05:06:39.248773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.668 [2024-11-21 05:06:39.248778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.668 [2024-11-21 05:06:39.248784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.668 [2024-11-21 05:06:39.248791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.668 [2024-11-21 05:06:39.248799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:22.668 [2024-11-21 05:06:39.248813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:22.668 [2024-11-21 05:06:39.248820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:22.668 [2024-11-21 05:06:39.248826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:22.668 [2024-11-21 05:06:39.248832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:22.668 [2024-11-21 05:06:39.248839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:22.668 [2024-11-21 05:06:39.248845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:22.668 [2024-11-21 05:06:39.248851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:22.668 [2024-11-21 05:06:39.248857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:22.668 [2024-11-21 05:06:39.248864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:22.668 [2024-11-21 05:06:39.248894] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.668 [2024-11-21 05:06:39.248901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.668 [2024-11-21 05:06:39.248917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.668 [2024-11-21 05:06:39.248924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.668 [2024-11-21 05:06:39.248930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.668 [2024-11-21 05:06:39.248936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.248942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.668 [2024-11-21 05:06:39.248950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:20:22.668 [2024-11-21 05:06:39.248957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.256852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.256880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.668 [2024-11-21 05:06:39.256887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.858 ms 00:20:22.668 [2024-11-21 05:06:39.256894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.256983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.257000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.668 [2024-11-21 05:06:39.257007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:22.668 [2024-11-21 05:06:39.257012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.276663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.276714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.668 [2024-11-21 05:06:39.276731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.631 ms 00:20:22.668 [2024-11-21 05:06:39.276748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.276851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.276872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.668 [2024-11-21 05:06:39.276884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.668 [2024-11-21 05:06:39.276894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.277281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.277317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.668 [2024-11-21 05:06:39.277333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:20:22.668 [2024-11-21 05:06:39.277345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.277527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.277547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.668 [2024-11-21 05:06:39.277564] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:20:22.668 [2024-11-21 05:06:39.277576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.283414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.283443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.668 [2024-11-21 05:06:39.283452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.808 ms 00:20:22.668 [2024-11-21 05:06:39.283459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.285777] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:22.668 [2024-11-21 05:06:39.285814] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.668 [2024-11-21 05:06:39.285824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.285832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.668 [2024-11-21 05:06:39.285840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:20:22.668 [2024-11-21 05:06:39.285847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.668 [2024-11-21 05:06:39.298791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.668 [2024-11-21 05:06:39.298824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.669 [2024-11-21 05:06:39.298835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.901 ms 00:20:22.669 [2024-11-21 05:06:39.298840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.300471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.300497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.669 [2024-11-21 05:06:39.300504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:20:22.669 [2024-11-21 05:06:39.300509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.302206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.302230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.669 [2024-11-21 05:06:39.302242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:20:22.669 [2024-11-21 05:06:39.302247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.302496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.302509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.669 [2024-11-21 05:06:39.302519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:22.669 [2024-11-21 05:06:39.302524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.319240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.319268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.669 [2024-11-21 05:06:39.319277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.697 ms 00:20:22.669 [2024-11-21 05:06:39.319287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.325273] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.669 [2024-11-21 05:06:39.337070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.337095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.669 [2024-11-21 05:06:39.337103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.735 ms 00:20:22.669 [2024-11-21 05:06:39.337110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.337190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.337199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.669 [2024-11-21 05:06:39.337206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:22.669 [2024-11-21 05:06:39.337211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.337246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.337252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.669 [2024-11-21 05:06:39.337258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:22.669 [2024-11-21 05:06:39.337264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.337281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.337287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.669 [2024-11-21 05:06:39.337293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.669 [2024-11-21 05:06:39.337298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.337321] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.669 [2024-11-21 05:06:39.337330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.337336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.669 [2024-11-21 05:06:39.337342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:22.669 [2024-11-21 05:06:39.337347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.341034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.341059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.669 [2024-11-21 05:06:39.341067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:20:22.669 [2024-11-21 05:06:39.341072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 [2024-11-21 05:06:39.341147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.669 [2024-11-21 05:06:39.341157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.669 [2024-11-21 05:06:39.341164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:22.669 [2024-11-21 05:06:39.341170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.669 
[2024-11-21 05:06:39.342181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.669 [2024-11-21 05:06:39.343019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.026 ms, result 0 00:20:22.669 [2024-11-21 05:06:39.343673] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.669 [2024-11-21 05:06:39.353725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.051  [2024-11-21T05:06:41.356Z] Copying: 19/256 [MB] (19 MBps) [2024-11-21T05:06:42.741Z] Copying: 41/256 [MB] (21 MBps) [2024-11-21T05:06:43.685Z] Copying: 60/256 [MB] (19 MBps) [2024-11-21T05:06:44.627Z] Copying: 78/256 [MB] (17 MBps) [2024-11-21T05:06:45.569Z] Copying: 93/256 [MB] (15 MBps) [2024-11-21T05:06:46.515Z] Copying: 113/256 [MB] (19 MBps) [2024-11-21T05:06:47.457Z] Copying: 130/256 [MB] (17 MBps) [2024-11-21T05:06:48.399Z] Copying: 148/256 [MB] (17 MBps) [2024-11-21T05:06:49.785Z] Copying: 161/256 [MB] (13 MBps) [2024-11-21T05:06:50.357Z] Copying: 177/256 [MB] (15 MBps) [2024-11-21T05:06:51.743Z] Copying: 192/256 [MB] (14 MBps) [2024-11-21T05:06:52.688Z] Copying: 215/256 [MB] (22 MBps) [2024-11-21T05:06:53.683Z] Copying: 227/256 [MB] (12 MBps) [2024-11-21T05:06:54.258Z] Copying: 242/256 [MB] (14 MBps) [2024-11-21T05:06:54.258Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-21 05:06:54.189707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.524 [2024-11-21 05:06:54.190666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.190688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.524 [2024-11-21 05:06:54.190702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:37.524 [2024-11-21 05:06:54.190708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.190723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.524 [2024-11-21 05:06:54.191073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.191086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.524 [2024-11-21 05:06:54.191093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:20:37.524 [2024-11-21 05:06:54.191099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.193323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.193352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.524 [2024-11-21 05:06:54.193359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:20:37.524 [2024-11-21 05:06:54.193365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.199560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.199587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.524 [2024-11-21 05:06:54.199595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:20:37.524 [2024-11-21 05:06:54.199606] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.204834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.204864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.524 [2024-11-21 05:06:54.204871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.183 ms 00:20:37.524 [2024-11-21 05:06:54.204877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.206914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.206942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.524 [2024-11-21 05:06:54.206949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:20:37.524 [2024-11-21 05:06:54.206955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.211318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.211346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.524 [2024-11-21 05:06:54.211357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.337 ms 00:20:37.524 [2024-11-21 05:06:54.211363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.211453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.211460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.524 [2024-11-21 05:06:54.211467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:37.524 [2024-11-21 05:06:54.211473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.214186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.214213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.524 [2024-11-21 05:06:54.214220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.697 ms 00:20:37.524 [2024-11-21 05:06:54.214225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.216515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.216540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.524 [2024-11-21 05:06:54.216547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:20:37.524 [2024-11-21 05:06:54.216552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.218229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.218255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.524 [2024-11-21 05:06:54.218261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:20:37.524 [2024-11-21 05:06:54.218266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.220013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.524 [2024-11-21 05:06:54.220039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.524 [2024-11-21 05:06:54.220045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.702 ms 00:20:37.524 [2024-11-21 05:06:54.220051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.524 [2024-11-21 05:06:54.220074] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.524 [2024-11-21 05:06:54.220085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220216] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.524 [2024-11-21 05:06:54.220327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 
[2024-11-21 05:06:54.220354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:20:37.525 [2024-11-21 05:06:54.220497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.525 [2024-11-21 05:06:54.220680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.525 [2024-11-21 05:06:54.220687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:20:37.525 [2024-11-21 05:06:54.220693] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.525 [2024-11-21 05:06:54.220698] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.525 [2024-11-21 05:06:54.220704] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.525 [2024-11-21 05:06:54.220710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.525 [2024-11-21 05:06:54.220716] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.525 [2024-11-21 05:06:54.220722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.525 [2024-11-21 05:06:54.220727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.525 [2024-11-21 05:06:54.220732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.525 [2024-11-21 05:06:54.220737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.525 [2024-11-21 05:06:54.220742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.525 [2024-11-21 05:06:54.220750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.525 [2024-11-21 05:06:54.220759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:20:37.525 [2024-11-21 05:06:54.220765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.221978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.525 [2024-11-21 05:06:54.221997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.525 [2024-11-21 05:06:54.222004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:20:37.525 [2024-11-21 05:06:54.222009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.222079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.525 [2024-11-21 05:06:54.222086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.525 [2024-11-21 05:06:54.222092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:37.525 [2024-11-21 05:06:54.222097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.226471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.525 [2024-11-21 05:06:54.226499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.525 [2024-11-21 05:06:54.226506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.525 [2024-11-21 05:06:54.226512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.226557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.525 [2024-11-21 05:06:54.226563] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.525 [2024-11-21 05:06:54.226569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.525 [2024-11-21 05:06:54.226575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.226603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.525 [2024-11-21 05:06:54.226624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.525 [2024-11-21 05:06:54.226630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.525 [2024-11-21 05:06:54.226635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.226647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.525 [2024-11-21 05:06:54.226659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.525 [2024-11-21 05:06:54.226667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.525 [2024-11-21 05:06:54.226672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.525 [2024-11-21 05:06:54.234182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.525 [2024-11-21 05:06:54.234216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.526 [2024-11-21 05:06:54.234223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.234229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.526 [2024-11-21 05:06:54.240436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.526 [2024-11-21 05:06:54.240501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.526 [2024-11-21 05:06:54.240542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.526 [2024-11-21 05:06:54.240624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.526 [2024-11-21 05:06:54.240671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.526 [2024-11-21 05:06:54.240728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.526 [2024-11-21 05:06:54.240774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.526 [2024-11-21 05:06:54.240783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.526 [2024-11-21 05:06:54.240789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.526 [2024-11-21 05:06:54.240886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.204 ms, result 0 00:20:38.098 00:20:38.098 00:20:38.098 05:06:54 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=88076 00:20:38.098 05:06:54 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 88076 00:20:38.098 05:06:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88076 ']' 00:20:38.098 05:06:54 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.098 05:06:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.098 05:06:54 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.098 05:06:54 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.098 05:06:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:38.098 05:06:54 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:38.098 [2024-11-21 05:06:54.810050] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
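For reference, the waitforlisten 88076 call traced above at ftl/trim.sh@73 is essentially a poll loop: keep checking that the pid is alive and retry an RPC against /var/tmp/spdk.sock until spdk_tgt answers (it completes a moment later — the (( i == 0 )) / return 0 trace just below). A minimal sketch of that pattern; the rpc_get_methods probe and the scripts/rpc.py invocation are assumptions about common/autotest_common.sh conventions, not taken from this log:

    # Poll until the SPDK target answers on its RPC socket, or give up.
    # Sketch only -- the real waitforlisten in common/autotest_common.sh
    # adds xtrace handling and finer-grained error reporting.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -s 0 "$pid" 2> /dev/null || return 1   # target process died
            # rpc_get_methods is assumed here as the cheapest liveness probe
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                        # timed out waiting
    }

In this run the probe succeeds within about a second of the spdk_tgt launch, so the trace falls straight through to return 0.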
00:20:38.098 [2024-11-21 05:06:54.810522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88076 ] 00:20:38.360 [2024-11-21 05:06:54.965225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.360 [2024-11-21 05:06:54.986225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.933 05:06:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.933 05:06:55 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:38.933 05:06:55 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:39.194 [2024-11-21 05:06:55.841043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.194 [2024-11-21 05:06:55.841088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.457 [2024-11-21 05:06:56.006460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.457 [2024-11-21 05:06:56.006495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.457 [2024-11-21 05:06:56.006505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:39.457 [2024-11-21 05:06:56.006513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.457 [2024-11-21 05:06:56.008405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.457 [2024-11-21 05:06:56.008442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.457 [2024-11-21 05:06:56.008451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms 00:20:39.457 [2024-11-21 05:06:56.008458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.457 [2024-11-21 05:06:56.008522] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.457 [2024-11-21 05:06:56.008715] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.457 [2024-11-21 05:06:56.008728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.457 [2024-11-21 05:06:56.008738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.457 [2024-11-21 05:06:56.008745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:39.457 [2024-11-21 05:06:56.008752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.457 [2024-11-21 05:06:56.009813] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:39.457 [2024-11-21 05:06:56.012090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.457 [2024-11-21 05:06:56.012116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:39.457 [2024-11-21 05:06:56.012131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:20:39.457 [2024-11-21 05:06:56.012136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.457 [2024-11-21 05:06:56.012182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.457 [2024-11-21 05:06:56.012192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:39.457 [2024-11-21 05:06:56.012202] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:39.457 [2024-11-21 05:06:56.012208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.457 [2024-11-21 05:06:56.016504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.457 [2024-11-21 05:06:56.016528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.457 [2024-11-21 05:06:56.016537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:20:39.457 [2024-11-21 05:06:56.016545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.457 [2024-11-21 05:06:56.016637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.458 [2024-11-21 05:06:56.016644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.458 [2024-11-21 05:06:56.016652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:39.458 [2024-11-21 05:06:56.016660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.458 [2024-11-21 05:06:56.016681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.458 [2024-11-21 05:06:56.016688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.458 [2024-11-21 05:06:56.016697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:39.458 [2024-11-21 05:06:56.016706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.458 [2024-11-21 05:06:56.016724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:39.458 [2024-11-21 05:06:56.017864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.458 [2024-11-21 05:06:56.017889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.458 [2024-11-21 05:06:56.017896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:20:39.458 [2024-11-21 05:06:56.017905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.458 [2024-11-21 05:06:56.017931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.458 [2024-11-21 05:06:56.017939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.458 [2024-11-21 05:06:56.017944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:39.458 [2024-11-21 05:06:56.017951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.458 [2024-11-21 05:06:56.017966] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:39.458 [2024-11-21 05:06:56.017981] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:39.458 [2024-11-21 05:06:56.018012] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:39.458 [2024-11-21 05:06:56.018028] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:39.458 [2024-11-21 05:06:56.018110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.458 [2024-11-21 05:06:56.018120] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.458 [2024-11-21 05:06:56.018128] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:39.458 [2024-11-21 05:06:56.018138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018146] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018156] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:39.458 [2024-11-21 05:06:56.018162] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.458 [2024-11-21 05:06:56.018168] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.458 [2024-11-21 05:06:56.018174] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.458 [2024-11-21 05:06:56.018183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.458 [2024-11-21 05:06:56.018189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.458 [2024-11-21 05:06:56.018196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:20:39.458 [2024-11-21 05:06:56.018204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.458 [2024-11-21 05:06:56.018273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.458 [2024-11-21 05:06:56.018279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.458 [2024-11-21 05:06:56.018286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:39.458 [2024-11-21 05:06:56.018291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.458 [2024-11-21 05:06:56.018373] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.458 [2024-11-21 05:06:56.018383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.458 [2024-11-21 05:06:56.018393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.458 [2024-11-21 05:06:56.018415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.458 [2024-11-21 05:06:56.018440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.458 [2024-11-21 05:06:56.018451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.458 [2024-11-21 05:06:56.018456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:39.458 [2024-11-21 05:06:56.018463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.458 [2024-11-21 05:06:56.018470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.458 [2024-11-21 05:06:56.018476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:39.458 [2024-11-21 05:06:56.018482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.458 
[2024-11-21 05:06:56.018488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.458 [2024-11-21 05:06:56.018494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.458 [2024-11-21 05:06:56.018513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.458 [2024-11-21 05:06:56.018533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.458 [2024-11-21 05:06:56.018554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.458 [2024-11-21 05:06:56.018576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.458 [2024-11-21 05:06:56.018596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.458 [2024-11-21 05:06:56.018622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.458 [2024-11-21 05:06:56.018628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:39.458 [2024-11-21 05:06:56.018636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.458 [2024-11-21 05:06:56.018642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.458 [2024-11-21 05:06:56.018651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:39.458 [2024-11-21 05:06:56.018657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.458 [2024-11-21 05:06:56.018671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:39.458 [2024-11-21 05:06:56.018679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018685] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.458 [2024-11-21 05:06:56.018693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.458 [2024-11-21 05:06:56.018700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.458 [2024-11-21 05:06:56.018714] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:39.458 [2024-11-21 05:06:56.018722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.458 [2024-11-21 05:06:56.018728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.458 [2024-11-21 05:06:56.018735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.458 [2024-11-21 05:06:56.018742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.458 [2024-11-21 05:06:56.018751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.458 [2024-11-21 05:06:56.018758] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.458 [2024-11-21 05:06:56.018769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.458 [2024-11-21 05:06:56.018777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:39.458 [2024-11-21 05:06:56.018785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:39.458 [2024-11-21 05:06:56.018791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:39.458 [2024-11-21 05:06:56.018800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:39.458 [2024-11-21 05:06:56.018807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:39.458 [2024-11-21 05:06:56.018816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:39.458 [2024-11-21 05:06:56.018823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:39.458 [2024-11-21 05:06:56.018830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:39.458 [2024-11-21 05:06:56.018837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:39.459 [2024-11-21 05:06:56.018845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:39.459 [2024-11-21 05:06:56.018852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:39.459 [2024-11-21 05:06:56.018860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:39.459 [2024-11-21 05:06:56.018868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:39.459 [2024-11-21 05:06:56.018876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:39.459 [2024-11-21 05:06:56.018883] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.459 [2024-11-21 
05:06:56.018896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.459 [2024-11-21 05:06:56.018903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.459 [2024-11-21 05:06:56.018911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.459 [2024-11-21 05:06:56.018918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.459 [2024-11-21 05:06:56.018925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.459 [2024-11-21 05:06:56.018931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.018938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.459 [2024-11-21 05:06:56.018945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:20:39.459 [2024-11-21 05:06:56.018952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.026728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.026758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.459 [2024-11-21 05:06:56.026765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.733 ms 00:20:39.459 [2024-11-21 05:06:56.026772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.026859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.026870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.459 [2024-11-21 05:06:56.026876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:39.459 [2024-11-21 05:06:56.026885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.034223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.034251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.459 [2024-11-21 05:06:56.034260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.323 ms 00:20:39.459 [2024-11-21 05:06:56.034268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.034301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.034309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.459 [2024-11-21 05:06:56.034318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:39.459 [2024-11-21 05:06:56.034325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.034623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.034645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.459 [2024-11-21 05:06:56.034656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:39.459 [2024-11-21 05:06:56.034663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:39.459 [2024-11-21 05:06:56.034762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.034782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.459 [2024-11-21 05:06:56.034788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:39.459 [2024-11-21 05:06:56.034802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.039520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.039547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.459 [2024-11-21 05:06:56.039557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.702 ms 00:20:39.459 [2024-11-21 05:06:56.039566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.041932] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:39.459 [2024-11-21 05:06:56.041960] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:39.459 [2024-11-21 05:06:56.041969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.041977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:39.459 [2024-11-21 05:06:56.041983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.318 ms 00:20:39.459 [2024-11-21 05:06:56.041990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.053298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.053329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:39.459 [2024-11-21 05:06:56.053339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.274 ms 00:20:39.459 [2024-11-21 05:06:56.053347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.055279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.055307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:39.459 [2024-11-21 05:06:56.055315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:20:39.459 [2024-11-21 05:06:56.055322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.057004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.057031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:39.459 [2024-11-21 05:06:56.057038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:20:39.459 [2024-11-21 05:06:56.057045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.057300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.057311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.459 [2024-11-21 05:06:56.057317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:39.459 [2024-11-21 05:06:56.057324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.087545] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.087590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:39.459 [2024-11-21 05:06:56.087602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.204 ms 00:20:39.459 [2024-11-21 05:06:56.087627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.093781] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:39.459 [2024-11-21 05:06:56.105181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.105208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.459 [2024-11-21 05:06:56.105219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.481 ms 00:20:39.459 [2024-11-21 05:06:56.105225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.105293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.105301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:39.459 [2024-11-21 05:06:56.105311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:39.459 [2024-11-21 05:06:56.105317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.105354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.105362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.459 [2024-11-21 05:06:56.105371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:39.459 [2024-11-21 05:06:56.105377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.105396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.105403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.459 [2024-11-21 05:06:56.105412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:39.459 [2024-11-21 05:06:56.105419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.105443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:39.459 [2024-11-21 05:06:56.105451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.105457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:39.459 [2024-11-21 05:06:56.105466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:39.459 [2024-11-21 05:06:56.105474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.108990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.109020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.459 [2024-11-21 05:06:56.109028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:20:39.459 [2024-11-21 05:06:56.109037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.109093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.459 [2024-11-21 05:06:56.109102] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.459 [2024-11-21 05:06:56.109111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:39.459 [2024-11-21 05:06:56.109118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.459 [2024-11-21 05:06:56.109822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.459 [2024-11-21 05:06:56.110603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.155 ms, result 0 00:20:39.459 [2024-11-21 05:06:56.112583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.459 Some configs were skipped because the RPC state that can call them passed over. 00:20:39.459 05:06:56 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:39.721 [2024-11-21 05:06:56.340069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.721 [2024-11-21 05:06:56.340099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:39.721 [2024-11-21 05:06:56.340110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.478 ms 00:20:39.721 [2024-11-21 05:06:56.340116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.721 [2024-11-21 05:06:56.340141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.553 ms, result 0 00:20:39.721 true 00:20:39.721 05:06:56 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:39.983 [2024-11-21 05:06:56.548646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.983 [2024-11-21 05:06:56.548779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:39.983 [2024-11-21 05:06:56.548814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:20:39.983 [2024-11-21 05:06:56.548842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.983 [2024-11-21 05:06:56.548952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.093 ms, result 0 00:20:39.983 true 00:20:39.983 05:06:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 88076 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88076 ']' 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88076 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88076 00:20:39.983 killing process with pid 88076 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88076' 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88076 00:20:39.983 05:06:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88076 00:20:40.246 [2024-11-21 05:06:56.749146] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.749220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:40.246 [2024-11-21 05:06:56.749236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.246 [2024-11-21 05:06:56.749246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.246 [2024-11-21 05:06:56.749277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:40.246 [2024-11-21 05:06:56.750150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.750200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:40.246 [2024-11-21 05:06:56.750216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:20:40.246 [2024-11-21 05:06:56.750227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.246 [2024-11-21 05:06:56.750562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.750577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:40.246 [2024-11-21 05:06:56.750586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:40.246 [2024-11-21 05:06:56.750602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.246 [2024-11-21 05:06:56.755116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.755163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:40.246 [2024-11-21 05:06:56.755183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:20:40.246 [2024-11-21 05:06:56.755195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.246 [2024-11-21 05:06:56.762264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.762318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:40.246 [2024-11-21 05:06:56.762330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.025 ms 00:20:40.246 [2024-11-21 05:06:56.762343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.246 [2024-11-21 05:06:56.765533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.765595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:40.246 [2024-11-21 05:06:56.765624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:20:40.246 [2024-11-21 05:06:56.765634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.246 [2024-11-21 05:06:56.771035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.246 [2024-11-21 05:06:56.771100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:40.246 [2024-11-21 05:06:56.771112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.347 ms 00:20:40.246 [2024-11-21 05:06:56.771125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.247 [2024-11-21 05:06:56.771292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.247 [2024-11-21 05:06:56.771307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:40.247 [2024-11-21 05:06:56.771316] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:40.247 [2024-11-21 05:06:56.771327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.247 [2024-11-21 05:06:56.774847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.247 [2024-11-21 05:06:56.774905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:40.247 [2024-11-21 05:06:56.774916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:20:40.247 [2024-11-21 05:06:56.774932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.247 [2024-11-21 05:06:56.777565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.247 [2024-11-21 05:06:56.777641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:40.247 [2024-11-21 05:06:56.777652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:20:40.247 [2024-11-21 05:06:56.777662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.247 [2024-11-21 05:06:56.779999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.247 [2024-11-21 05:06:56.780056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:40.247 [2024-11-21 05:06:56.780067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:20:40.247 [2024-11-21 05:06:56.780077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.247 [2024-11-21 05:06:56.782044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.247 [2024-11-21 05:06:56.782103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:40.247 [2024-11-21 05:06:56.782113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:20:40.247 [2024-11-21 05:06:56.782122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.247 [2024-11-21 05:06:56.782171] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:40.247 [2024-11-21 05:06:56.782191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782286] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 
05:06:56.782518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:20:40.247 [2024-11-21 05:06:56.782766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:40.247 [2024-11-21 05:06:56.782888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.782985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:40.248 [2024-11-21 05:06:56.783165] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:40.248 [2024-11-21 05:06:56.783174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:20:40.248 [2024-11-21 05:06:56.783184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:40.248 [2024-11-21 05:06:56.783196] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:40.248 [2024-11-21 05:06:56.783205] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:40.248 [2024-11-21 05:06:56.783214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:40.248 [2024-11-21 05:06:56.783224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:40.248 [2024-11-21 05:06:56.783237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:40.248 [2024-11-21 05:06:56.783247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:40.248 [2024-11-21 05:06:56.783254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:40.248 [2024-11-21 05:06:56.783263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:40.248 [2024-11-21 05:06:56.783270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.248 
[2024-11-21 05:06:56.783280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:40.248 [2024-11-21 05:06:56.783289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:20:40.248 [2024-11-21 05:06:56.783303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.786321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.248 [2024-11-21 05:06:56.786359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:40.248 [2024-11-21 05:06:56.786370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:20:40.248 [2024-11-21 05:06:56.786381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.786538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.248 [2024-11-21 05:06:56.786549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:40.248 [2024-11-21 05:06:56.786559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:40.248 [2024-11-21 05:06:56.786569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.797690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.797748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.248 [2024-11-21 05:06:56.797760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.797771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.797872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.797891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.248 [2024-11-21 05:06:56.797901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.797915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.797968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.797981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.248 [2024-11-21 05:06:56.797994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.798009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.798031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.798042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.248 [2024-11-21 05:06:56.798051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.798062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.819422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.819491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.248 [2024-11-21 05:06:56.819510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.819522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.835633] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.835701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.248 [2024-11-21 05:06:56.835714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.835729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.835808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.835826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.248 [2024-11-21 05:06:56.835835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.835846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.835887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.835899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.248 [2024-11-21 05:06:56.835913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.835924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.836015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.836032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.248 [2024-11-21 05:06:56.836044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.836054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.836092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.836105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:40.248 [2024-11-21 05:06:56.836115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.836127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.836182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.836195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.248 [2024-11-21 05:06:56.836207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.836219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.836284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:40.248 [2024-11-21 05:06:56.836299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.248 [2024-11-21 05:06:56.836309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:40.248 [2024-11-21 05:06:56.836321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.248 [2024-11-21 05:06:56.836508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.339 ms, result 0 00:20:40.510 05:06:57 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:40.510 05:06:57 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:40.510 [2024-11-21 05:06:57.241816] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:20:40.510 [2024-11-21 05:06:57.241989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88113 ] 00:20:40.772 [2024-11-21 05:06:57.405561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.772 [2024-11-21 05:06:57.447305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.033 [2024-11-21 05:06:57.599147] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.033 [2024-11-21 05:06:57.599251] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.033 [2024-11-21 05:06:57.763143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.033 [2024-11-21 05:06:57.763209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:41.033 [2024-11-21 05:06:57.763226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:41.033 [2024-11-21 05:06:57.763236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.766047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.766106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.296 [2024-11-21 05:06:57.766122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.790 ms 00:20:41.296 [2024-11-21 05:06:57.766131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.766244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:41.296 [2024-11-21 05:06:57.766541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:41.296 [2024-11-21 05:06:57.766557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.766569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.296 [2024-11-21 05:06:57.766579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:41.296 [2024-11-21 05:06:57.766588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.769005] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:41.296 [2024-11-21 05:06:57.773907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.774121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:41.296 [2024-11-21 05:06:57.774152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:20:41.296 [2024-11-21 05:06:57.774162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.774279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.774294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:41.296 [2024-11-21 05:06:57.774304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:20:41.296 [2024-11-21 05:06:57.774320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.786221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.786270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.296 [2024-11-21 05:06:57.786283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.849 ms 00:20:41.296 [2024-11-21 05:06:57.786292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.786461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.786474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.296 [2024-11-21 05:06:57.786485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:41.296 [2024-11-21 05:06:57.786493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.786527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.786537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:41.296 [2024-11-21 05:06:57.786549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:41.296 [2024-11-21 05:06:57.786563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.786586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:41.296 [2024-11-21 05:06:57.789343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.789534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.296 [2024-11-21 05:06:57.789554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:20:41.296 [2024-11-21 05:06:57.789563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.789652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.789664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:41.296 [2024-11-21 05:06:57.789681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:41.296 [2024-11-21 05:06:57.789694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.789718] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:41.296 [2024-11-21 05:06:57.789747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:41.296 [2024-11-21 05:06:57.789790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:41.296 [2024-11-21 05:06:57.789812] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:41.296 [2024-11-21 05:06:57.789928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:41.296 [2024-11-21 05:06:57.789940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:41.296 [2024-11-21 05:06:57.789952] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:41.296 [2024-11-21 05:06:57.789964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:41.296 [2024-11-21 05:06:57.789973] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:41.296 [2024-11-21 05:06:57.789983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:41.296 [2024-11-21 05:06:57.789992] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:41.296 [2024-11-21 05:06:57.789999] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:41.296 [2024-11-21 05:06:57.790011] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:41.296 [2024-11-21 05:06:57.790023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.790031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:41.296 [2024-11-21 05:06:57.790039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:20:41.296 [2024-11-21 05:06:57.790051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.790141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.296 [2024-11-21 05:06:57.790154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:41.296 [2024-11-21 05:06:57.790162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:41.296 [2024-11-21 05:06:57.790170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.296 [2024-11-21 05:06:57.790280] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:41.296 [2024-11-21 05:06:57.790298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:41.296 [2024-11-21 05:06:57.790313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.296 [2024-11-21 05:06:57.790327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.296 [2024-11-21 05:06:57.790336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:41.297 [2024-11-21 05:06:57.790343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:41.297 [2024-11-21 05:06:57.790369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.297 [2024-11-21 05:06:57.790383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:41.297 [2024-11-21 05:06:57.790390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:41.297 [2024-11-21 05:06:57.790398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.297 [2024-11-21 05:06:57.790405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:41.297 [2024-11-21 05:06:57.790412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:41.297 [2024-11-21 05:06:57.790420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790426] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:41.297 [2024-11-21 05:06:57.790433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:41.297 [2024-11-21 05:06:57.790454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:41.297 [2024-11-21 05:06:57.790485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:41.297 [2024-11-21 05:06:57.790507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:41.297 [2024-11-21 05:06:57.790529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:41.297 [2024-11-21 05:06:57.790550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.297 [2024-11-21 05:06:57.790563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:41.297 [2024-11-21 05:06:57.790571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:41.297 [2024-11-21 05:06:57.790577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.297 [2024-11-21 05:06:57.790584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:41.297 [2024-11-21 05:06:57.790591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:41.297 [2024-11-21 05:06:57.790604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:41.297 [2024-11-21 05:06:57.790634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:41.297 [2024-11-21 05:06:57.790642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790650] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:41.297 [2024-11-21 05:06:57.790659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:41.297 [2024-11-21 05:06:57.790668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.297 [2024-11-21 05:06:57.790683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:41.297 
[2024-11-21 05:06:57.790690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:41.297 [2024-11-21 05:06:57.790697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:41.297 [2024-11-21 05:06:57.790705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:41.297 [2024-11-21 05:06:57.790711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:41.297 [2024-11-21 05:06:57.790718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:41.297 [2024-11-21 05:06:57.790727] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:41.297 [2024-11-21 05:06:57.790738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:41.297 [2024-11-21 05:06:57.790769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:41.297 [2024-11-21 05:06:57.790776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:41.297 [2024-11-21 05:06:57.790784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:41.297 [2024-11-21 05:06:57.790793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:41.297 [2024-11-21 05:06:57.790800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:41.297 [2024-11-21 05:06:57.790807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:41.297 [2024-11-21 05:06:57.790815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:41.297 [2024-11-21 05:06:57.790821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:41.297 [2024-11-21 05:06:57.790829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:41.297 [2024-11-21 05:06:57.790868] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:41.297 [2024-11-21 05:06:57.790877] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:41.297 [2024-11-21 05:06:57.790899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:41.297 [2024-11-21 05:06:57.790907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:41.297 [2024-11-21 05:06:57.790915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:41.297 [2024-11-21 05:06:57.790923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.790931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:41.297 [2024-11-21 05:06:57.790943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:20:41.297 [2024-11-21 05:06:57.790951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 05:06:57.811752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.811959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.297 [2024-11-21 05:06:57.811979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.727 ms 00:20:41.297 [2024-11-21 05:06:57.811989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 05:06:57.812141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.812160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:41.297 [2024-11-21 05:06:57.812170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:41.297 [2024-11-21 05:06:57.812179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 05:06:57.841238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.841324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.297 [2024-11-21 05:06:57.841351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.028 ms 00:20:41.297 [2024-11-21 05:06:57.841370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 05:06:57.841531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.841560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.297 [2024-11-21 05:06:57.841580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:41.297 [2024-11-21 05:06:57.841604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 05:06:57.842427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.842491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.297 [2024-11-21 05:06:57.842513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:20:41.297 [2024-11-21 05:06:57.842543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 
05:06:57.842839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.297 [2024-11-21 05:06:57.842859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.297 [2024-11-21 05:06:57.842881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:20:41.297 [2024-11-21 05:06:57.842897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.297 [2024-11-21 05:06:57.855232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.855282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.298 [2024-11-21 05:06:57.855302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.292 ms 00:20:41.298 [2024-11-21 05:06:57.855312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.860397] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:41.298 [2024-11-21 05:06:57.860453] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:41.298 [2024-11-21 05:06:57.860467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.860477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:41.298 [2024-11-21 05:06:57.860487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:20:41.298 [2024-11-21 05:06:57.860495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.876976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.877029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:41.298 [2024-11-21 05:06:57.877043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.390 ms 00:20:41.298 [2024-11-21 05:06:57.877051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.880529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.880580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:41.298 [2024-11-21 05:06:57.880590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:20:41.298 [2024-11-21 05:06:57.880598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.883417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.883625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:41.298 [2024-11-21 05:06:57.883647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:20:41.298 [2024-11-21 05:06:57.883655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.884020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.884037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:41.298 [2024-11-21 05:06:57.884053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:41.298 [2024-11-21 05:06:57.884061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.913570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.913661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:41.298 [2024-11-21 05:06:57.913676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.482 ms 00:20:41.298 [2024-11-21 05:06:57.913686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.922138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:41.298 [2024-11-21 05:06:57.940654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.940689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.298 [2024-11-21 05:06:57.940701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.866 ms 00:20:41.298 [2024-11-21 05:06:57.940709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.940782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.940793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.298 [2024-11-21 05:06:57.940801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:41.298 [2024-11-21 05:06:57.940813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.940864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.940877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.298 [2024-11-21 05:06:57.940886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:41.298 [2024-11-21 05:06:57.940893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.940920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.940929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.298 [2024-11-21 05:06:57.940937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:41.298 [2024-11-21 05:06:57.940945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.940981] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.298 [2024-11-21 05:06:57.940990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.940998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.298 [2024-11-21 05:06:57.941006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:41.298 [2024-11-21 05:06:57.941013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.945589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.945639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:41.298 [2024-11-21 05:06:57.945657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.554 ms 00:20:41.298 [2024-11-21 05:06:57.945668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.945750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.298 [2024-11-21 05:06:57.945760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:41.298 [2024-11-21 05:06:57.945773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:41.298 [2024-11-21 05:06:57.945781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.298 [2024-11-21 05:06:57.947211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.298 [2024-11-21 05:06:57.948272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 183.775 ms, result 0 00:20:41.298 [2024-11-21 05:06:57.949249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.298 [2024-11-21 05:06:57.957209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.244  [2024-11-21T05:07:00.366Z] Copying: 14/256 [MB] (14 MBps) [2024-11-21T05:07:01.309Z] Copying: 25/256 [MB] (10 MBps) [2024-11-21T05:07:02.253Z] Copying: 47/256 [MB] (22 MBps) [2024-11-21T05:07:03.199Z] Copying: 59/256 [MB] (11 MBps) [2024-11-21T05:07:04.145Z] Copying: 76/256 [MB] (16 MBps) [2024-11-21T05:07:05.091Z] Copying: 90/256 [MB] (13 MBps) [2024-11-21T05:07:06.037Z] Copying: 105/256 [MB] (14 MBps) [2024-11-21T05:07:06.982Z] Copying: 115/256 [MB] (10 MBps) [2024-11-21T05:07:08.374Z] Copying: 129/256 [MB] (14 MBps) [2024-11-21T05:07:09.319Z] Copying: 145/256 [MB] (16 MBps) [2024-11-21T05:07:10.258Z] Copying: 157/256 [MB] (11 MBps) [2024-11-21T05:07:11.216Z] Copying: 179/256 [MB] (21 MBps) [2024-11-21T05:07:12.160Z] Copying: 193/256 [MB] (14 MBps) [2024-11-21T05:07:13.104Z] Copying: 204/256 [MB] (10 MBps) [2024-11-21T05:07:14.047Z] Copying: 218/256 [MB] (14 MBps) [2024-11-21T05:07:14.987Z] Copying: 235/256 [MB] (16 MBps) [2024-11-21T05:07:15.249Z] Copying: 253/256 [MB] (17 MBps) [2024-11-21T05:07:15.249Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-21 05:07:15.231179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.515 [2024-11-21 05:07:15.233678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.515 [2024-11-21 05:07:15.233734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.515 [2024-11-21 05:07:15.233751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:58.515 [2024-11-21 05:07:15.233761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.515 [2024-11-21 05:07:15.233785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:58.515 [2024-11-21 05:07:15.234759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.515 [2024-11-21 05:07:15.234795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.515 [2024-11-21 05:07:15.234808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:20:58.515 [2024-11-21 05:07:15.234818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.515 [2024-11-21 05:07:15.235098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.515 [2024-11-21 05:07:15.235255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.515 [2024-11-21 05:07:15.235268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:58.515 [2024-11-21 05:07:15.235288] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:58.515 [2024-11-21 05:07:15.239028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.515 [2024-11-21 05:07:15.239054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.515 [2024-11-21 05:07:15.239065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:20:58.515 [2024-11-21 05:07:15.239073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.515 [2024-11-21 05:07:15.246048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.515 [2024-11-21 05:07:15.246248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.515 [2024-11-21 05:07:15.246270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:20:58.515 [2024-11-21 05:07:15.246295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.780 [2024-11-21 05:07:15.249509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.780 [2024-11-21 05:07:15.249707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.781 [2024-11-21 05:07:15.249726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:20:58.781 [2024-11-21 05:07:15.249734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.255370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.781 [2024-11-21 05:07:15.255436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.781 [2024-11-21 05:07:15.255448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:20:58.781 [2024-11-21 05:07:15.255457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.255599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.781 [2024-11-21 05:07:15.255629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.781 [2024-11-21 05:07:15.255639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:58.781 [2024-11-21 05:07:15.255658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.259078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.781 [2024-11-21 05:07:15.259124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.781 [2024-11-21 05:07:15.259135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.400 ms 00:20:58.781 [2024-11-21 05:07:15.259143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.262130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.781 [2024-11-21 05:07:15.262177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.781 [2024-11-21 05:07:15.262188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.940 ms 00:20:58.781 [2024-11-21 05:07:15.262196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.264433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.781 [2024-11-21 05:07:15.264479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.781 [2024-11-21 05:07:15.264489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:20:58.781 
[2024-11-21 05:07:15.264497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.266945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.781 [2024-11-21 05:07:15.266992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.781 [2024-11-21 05:07:15.267002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:20:58.781 [2024-11-21 05:07:15.267009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.781 [2024-11-21 05:07:15.267052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.781 [2024-11-21 05:07:15.267070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267233] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 
[2024-11-21 05:07:15.267431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:58.781 [2024-11-21 05:07:15.267530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:20:58.782 [2024-11-21 05:07:15.267653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.782 [2024-11-21 05:07:15.267917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.782 [2024-11-21 05:07:15.267926] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:20:58.782 [2024-11-21 05:07:15.267935] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:58.782 [2024-11-21 05:07:15.267944] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:58.782 [2024-11-21 05:07:15.267952] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:58.782 [2024-11-21 05:07:15.267960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:58.782 [2024-11-21 05:07:15.267968] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.782 [2024-11-21 05:07:15.267976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.782 [2024-11-21 05:07:15.267984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.782 [2024-11-21 05:07:15.267990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.782 [2024-11-21 05:07:15.267998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.782 [2024-11-21 05:07:15.268007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.782 [2024-11-21 05:07:15.268019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.782 [2024-11-21 05:07:15.268028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:20:58.782 [2024-11-21 05:07:15.268043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.271501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.782 [2024-11-21 05:07:15.271685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.782 [2024-11-21 05:07:15.271838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.438 ms 00:20:58.782 [2024-11-21 05:07:15.271876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.272061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.782 [2024-11-21 05:07:15.272158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.782 [2024-11-21 05:07:15.272185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:58.782 [2024-11-21 05:07:15.272206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.283314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.283487] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.782 [2024-11-21 05:07:15.283544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.283568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.283726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.283755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.782 [2024-11-21 05:07:15.283777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.283797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.283866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.283891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.782 [2024-11-21 05:07:15.283981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.284007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.284055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.284079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.782 [2024-11-21 05:07:15.284102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.284123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.304334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.304567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.782 [2024-11-21 05:07:15.304681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.304709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.320513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.320779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.782 [2024-11-21 05:07:15.321036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.321050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.321114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.321152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.782 [2024-11-21 05:07:15.321163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.321172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.321209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.321226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.782 [2024-11-21 05:07:15.321236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.321245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.321343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:58.782 [2024-11-21 05:07:15.321357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.782 [2024-11-21 05:07:15.321367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.782 [2024-11-21 05:07:15.321376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.782 [2024-11-21 05:07:15.321416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.782 [2024-11-21 05:07:15.321429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:58.783 [2024-11-21 05:07:15.321441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.783 [2024-11-21 05:07:15.321450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.783 [2024-11-21 05:07:15.321505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.783 [2024-11-21 05:07:15.321518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.783 [2024-11-21 05:07:15.321533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.783 [2024-11-21 05:07:15.321543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.783 [2024-11-21 05:07:15.321628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.783 [2024-11-21 05:07:15.321643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.783 [2024-11-21 05:07:15.321657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.783 [2024-11-21 05:07:15.321668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.783 [2024-11-21 05:07:15.321862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.146 ms, result 0 00:20:59.045 00:20:59.045 00:20:59.045 05:07:15 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:59.045 05:07:15 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:59.617 05:07:16 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:59.617 [2024-11-21 05:07:16.275940] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
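The two ftl/trim.sh commands above are the actual data check for this pass: a range that was trimmed through the FTL bdev must read back as zeros, so comparing the dumped data file byte-for-byte against /dev/zero catches stale data, and md5sum records a checksum for a later comparison; spdk_dd then pushes 1024 blocks of a random pattern back through ftl0 using the saved bdev configuration. A standalone sketch of the same sequence (the paths here are illustrative, not the harness's):

    #!/usr/bin/env bash
    # Verify that a 4 MiB dump of a trimmed LBA range reads back as all zeros,
    # then rewrite the range through the FTL bdev (ftl0).
    DATA=./data                                      # example path to the dumped range
    cmp --bytes=4194304 "$DATA" /dev/zero || echo "trimmed range not zeroed" >&2
    md5sum "$DATA"                                   # checksum kept for a later compare
    spdk_dd --if=./random_pattern --ob=ftl0 --count=1024 --json=./ftl.json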
00:20:59.617 [2024-11-21 05:07:16.276116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88316 ] 00:20:59.878 [2024-11-21 05:07:16.443031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.878 [2024-11-21 05:07:16.483880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.141 [2024-11-21 05:07:16.634165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.141 [2024-11-21 05:07:16.634257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.141 [2024-11-21 05:07:16.799268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.141 [2024-11-21 05:07:16.799341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.141 [2024-11-21 05:07:16.799367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:00.141 [2024-11-21 05:07:16.799381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.802331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.141 [2024-11-21 05:07:16.802541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.141 [2024-11-21 05:07:16.802564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:21:00.141 [2024-11-21 05:07:16.802573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.802829] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.141 [2024-11-21 05:07:16.803167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.141 [2024-11-21 05:07:16.803200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.141 [2024-11-21 05:07:16.803213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.141 [2024-11-21 05:07:16.803228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:21:00.141 [2024-11-21 05:07:16.803237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.805630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:00.141 [2024-11-21 05:07:16.810464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.141 [2024-11-21 05:07:16.810518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:00.141 [2024-11-21 05:07:16.810537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.860 ms 00:21:00.141 [2024-11-21 05:07:16.810546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.810666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.141 [2024-11-21 05:07:16.810679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:00.141 [2024-11-21 05:07:16.810688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:00.141 [2024-11-21 05:07:16.810704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.822238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:00.141 [2024-11-21 05:07:16.822282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.141 [2024-11-21 05:07:16.822294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.483 ms 00:21:00.141 [2024-11-21 05:07:16.822303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.822458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.141 [2024-11-21 05:07:16.822472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.141 [2024-11-21 05:07:16.822485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:00.141 [2024-11-21 05:07:16.822493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.141 [2024-11-21 05:07:16.822530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.142 [2024-11-21 05:07:16.822540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.142 [2024-11-21 05:07:16.822555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:00.142 [2024-11-21 05:07:16.822564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.142 [2024-11-21 05:07:16.822588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:00.142 [2024-11-21 05:07:16.825455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.142 [2024-11-21 05:07:16.825650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.142 [2024-11-21 05:07:16.825678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:21:00.142 [2024-11-21 05:07:16.825693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.142 [2024-11-21 05:07:16.825752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.142 [2024-11-21 05:07:16.825764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.142 [2024-11-21 05:07:16.825774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:00.142 [2024-11-21 05:07:16.825784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.142 [2024-11-21 05:07:16.825805] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:00.142 [2024-11-21 05:07:16.825832] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:00.142 [2024-11-21 05:07:16.825873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:00.142 [2024-11-21 05:07:16.825894] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:00.142 [2024-11-21 05:07:16.826008] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.142 [2024-11-21 05:07:16.826026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.142 [2024-11-21 05:07:16.826043] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:00.142 [2024-11-21 05:07:16.826056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826067] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:00.142 [2024-11-21 05:07:16.826090] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.142 [2024-11-21 05:07:16.826098] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.142 [2024-11-21 05:07:16.826109] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.142 [2024-11-21 05:07:16.826122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.142 [2024-11-21 05:07:16.826129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.142 [2024-11-21 05:07:16.826145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:21:00.142 [2024-11-21 05:07:16.826153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.142 [2024-11-21 05:07:16.826245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.142 [2024-11-21 05:07:16.826255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.142 [2024-11-21 05:07:16.826263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:00.142 [2024-11-21 05:07:16.826271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.142 [2024-11-21 05:07:16.826372] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.142 [2024-11-21 05:07:16.826384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.142 [2024-11-21 05:07:16.826398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.142 [2024-11-21 05:07:16.826437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.142 [2024-11-21 05:07:16.826462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.142 [2024-11-21 05:07:16.826477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.142 [2024-11-21 05:07:16.826486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:00.142 [2024-11-21 05:07:16.826494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.142 [2024-11-21 05:07:16.826501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.142 [2024-11-21 05:07:16.826508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:00.142 [2024-11-21 05:07:16.826515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.142 [2024-11-21 05:07:16.826531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826540] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.142 [2024-11-21 05:07:16.826555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.142 [2024-11-21 05:07:16.826582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.142 [2024-11-21 05:07:16.826604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.142 [2024-11-21 05:07:16.826649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.142 [2024-11-21 05:07:16.826672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.142 [2024-11-21 05:07:16.826687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.142 [2024-11-21 05:07:16.826694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:00.142 [2024-11-21 05:07:16.826701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.142 [2024-11-21 05:07:16.826709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.142 [2024-11-21 05:07:16.826716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:00.142 [2024-11-21 05:07:16.826725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.142 [2024-11-21 05:07:16.826741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:00.142 [2024-11-21 05:07:16.826748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826756] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.142 [2024-11-21 05:07:16.826765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.142 [2024-11-21 05:07:16.826773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.142 [2024-11-21 05:07:16.826792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.142 [2024-11-21 05:07:16.826799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.142 [2024-11-21 05:07:16.826808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.142 
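Each dump_region entry in this layout dump names one metadata region and gives its offset and size in MiB on either the NV cache or the base device; the base-device layout continues just below. The region sizes follow from the geometry printed earlier, e.g. the 90.00 MiB l2p region is exactly L2P entries times L2P address size. A quick check with the values from this log (illustrative arithmetic only):

    # L2P region size = L2P entries * L2P address size
    echo $(( 23592960 * 4 ))            # 94371840 bytes
    echo $(( 23592960 * 4 / 1048576 ))  # 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"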
[2024-11-21 05:07:16.826815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.142 [2024-11-21 05:07:16.826822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.142 [2024-11-21 05:07:16.826829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.142 [2024-11-21 05:07:16.826839] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.142 [2024-11-21 05:07:16.826851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.142 [2024-11-21 05:07:16.826862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:00.142 [2024-11-21 05:07:16.826870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:00.142 [2024-11-21 05:07:16.826877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:00.142 [2024-11-21 05:07:16.826884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:00.142 [2024-11-21 05:07:16.826892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:00.142 [2024-11-21 05:07:16.826899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:00.142 [2024-11-21 05:07:16.826907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:00.142 [2024-11-21 05:07:16.826914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:00.142 [2024-11-21 05:07:16.826922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:00.142 [2024-11-21 05:07:16.826930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:00.142 [2024-11-21 05:07:16.826937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:00.143 [2024-11-21 05:07:16.826945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:00.143 [2024-11-21 05:07:16.826953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:00.143 [2024-11-21 05:07:16.826961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:00.143 [2024-11-21 05:07:16.826969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.143 [2024-11-21 05:07:16.826978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.143 [2024-11-21 05:07:16.826992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.143 [2024-11-21 05:07:16.826999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.143 [2024-11-21 05:07:16.827007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.143 [2024-11-21 05:07:16.827015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.143 [2024-11-21 05:07:16.827023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.143 [2024-11-21 05:07:16.827031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.143 [2024-11-21 05:07:16.827039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:21:00.143 [2024-11-21 05:07:16.827047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.143 [2024-11-21 05:07:16.847875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.143 [2024-11-21 05:07:16.848042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.143 [2024-11-21 05:07:16.848102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:21:00.143 [2024-11-21 05:07:16.848127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.143 [2024-11-21 05:07:16.848284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.143 [2024-11-21 05:07:16.848326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:00.143 [2024-11-21 05:07:16.848353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:00.143 [2024-11-21 05:07:16.848410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.404 [2024-11-21 05:07:16.873523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.404 [2024-11-21 05:07:16.873805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.404 [2024-11-21 05:07:16.874301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.062 ms 00:21:00.404 [2024-11-21 05:07:16.874376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.404 [2024-11-21 05:07:16.874605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.404 [2024-11-21 05:07:16.874743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.404 [2024-11-21 05:07:16.874816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:00.404 [2024-11-21 05:07:16.874854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.404 [2024-11-21 05:07:16.875664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.404 [2024-11-21 05:07:16.875823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.404 [2024-11-21 05:07:16.875904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:21:00.404 [2024-11-21 05:07:16.875943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.404 [2024-11-21 05:07:16.876210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.404 [2024-11-21 05:07:16.876250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.404 [2024-11-21 05:07:16.876748] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:21:00.404 [2024-11-21 05:07:16.876825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.404 [2024-11-21 05:07:16.889357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.404 [2024-11-21 05:07:16.889531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.404 [2024-11-21 05:07:16.889600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:21:00.404 [2024-11-21 05:07:16.889649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.404 [2024-11-21 05:07:16.894469] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:00.405 [2024-11-21 05:07:16.894670] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:00.405 [2024-11-21 05:07:16.894741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.894765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:00.405 [2024-11-21 05:07:16.894786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.946 ms 00:21:00.405 [2024-11-21 05:07:16.894806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.910975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.911141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:00.405 [2024-11-21 05:07:16.911215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.072 ms 00:21:00.405 [2024-11-21 05:07:16.911245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.914317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.914474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:00.405 [2024-11-21 05:07:16.914530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:21:00.405 [2024-11-21 05:07:16.914553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.917329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.917482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:00.405 [2024-11-21 05:07:16.917661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:21:00.405 [2024-11-21 05:07:16.917686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.918135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.918191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:00.405 [2024-11-21 05:07:16.918286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:00.405 [2024-11-21 05:07:16.918310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.948220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.948434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:00.405 [2024-11-21 05:07:16.948493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
29.855 ms 00:21:00.405 [2024-11-21 05:07:16.948518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.956918] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:00.405 [2024-11-21 05:07:16.981900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.982081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:00.405 [2024-11-21 05:07:16.982140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.262 ms 00:21:00.405 [2024-11-21 05:07:16.982165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.982285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.982315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:00.405 [2024-11-21 05:07:16.982328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:00.405 [2024-11-21 05:07:16.982341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.982415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.982428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:00.405 [2024-11-21 05:07:16.982438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:00.405 [2024-11-21 05:07:16.982447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.982475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.982487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:00.405 [2024-11-21 05:07:16.982497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:00.405 [2024-11-21 05:07:16.982505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.982549] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:00.405 [2024-11-21 05:07:16.982560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.982571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:00.405 [2024-11-21 05:07:16.982580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:00.405 [2024-11-21 05:07:16.982589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.989744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.989799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:00.405 [2024-11-21 05:07:16.989810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.096 ms 00:21:00.405 [2024-11-21 05:07:16.989820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 [2024-11-21 05:07:16.989935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.405 [2024-11-21 05:07:16.989947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:00.405 [2024-11-21 05:07:16.989958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:00.405 [2024-11-21 05:07:16.989968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.405 
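Every step in the management chain above emits the same four trace records (Action or Rollback, name, duration, status), which makes the per-step cost of the startup finishing below easy to tabulate from a saved console log. A small sketch, assuming the raw one-entry-per-line log was captured as ftl.log (the filename is an example):

    # Pair each step name with its duration and rank the slowest steps.
    grep -E ' name: | duration: ' ftl.log |
      sed -E 's/.* (name|duration): //' |
      paste - - |                      # -> "Load super block <TAB> 4.949 ms"
      sort -t $'\t' -k 2 -rn | head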
[2024-11-21 05:07:16.991290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.405 [2024-11-21 05:07:16.992799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 191.661 ms, result 0 00:21:00.405 [2024-11-21 05:07:16.994051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:00.405 [2024-11-21 05:07:17.001595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.978  [2024-11-21T05:07:17.712Z] Copying: 4096/4096 [kB] (average 9683 kBps)[2024-11-21 05:07:17.425437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:00.978 [2024-11-21 05:07:17.427108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.978 [2024-11-21 05:07:17.427173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:00.978 [2024-11-21 05:07:17.427193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:00.978 [2024-11-21 05:07:17.427202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.978 [2024-11-21 05:07:17.427226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:00.978 [2024-11-21 05:07:17.428200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.978 [2024-11-21 05:07:17.428249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:00.978 [2024-11-21 05:07:17.428262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:21:00.978 [2024-11-21 05:07:17.428274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.978 [2024-11-21 05:07:17.431414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.978 [2024-11-21 05:07:17.431471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:00.978 [2024-11-21 05:07:17.431484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:21:00.979 [2024-11-21 05:07:17.431501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.435980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.436023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.979 [2024-11-21 05:07:17.436035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.461 ms 00:21:00.979 [2024-11-21 05:07:17.436044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.443398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.443665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.979 [2024-11-21 05:07:17.443689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.306 ms 00:21:00.979 [2024-11-21 05:07:17.443716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.446826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.447028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.979 [2024-11-21 05:07:17.447049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.053 ms 00:21:00.979 [2024-11-21 05:07:17.447057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.452775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.452844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.979 [2024-11-21 05:07:17.452855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.568 ms 00:21:00.979 [2024-11-21 05:07:17.452864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.453009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.453021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.979 [2024-11-21 05:07:17.453030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:00.979 [2024-11-21 05:07:17.453045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.456638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.456686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.979 [2024-11-21 05:07:17.456698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.573 ms 00:21:00.979 [2024-11-21 05:07:17.456705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.459712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.459765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.979 [2024-11-21 05:07:17.459777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:21:00.979 [2024-11-21 05:07:17.459784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.462123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.462340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.979 [2024-11-21 05:07:17.462359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.287 ms 00:21:00.979 [2024-11-21 05:07:17.462368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.464797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.979 [2024-11-21 05:07:17.464845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.979 [2024-11-21 05:07:17.464855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:21:00.979 [2024-11-21 05:07:17.464863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.979 [2024-11-21 05:07:17.464910] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.979 [2024-11-21 05:07:17.464928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.979 [2024-11-21 05:07:17.464940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.979 [2024-11-21 05:07:17.464950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.979 [2024-11-21 05:07:17.464959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.979 
[2024-11-21 05:07:17.464967 - 05:07:17.465774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 5-100: 0 / 261120 wr_cnt: 0 state: free (all 96 remaining bands identical) 00:21:00.980 [2024-11-21 05:07:17.465790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.980 [2024-11-21 05:07:17.465799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:21:00.980 [2024-11-21 05:07:17.465808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.980 [2024-11-21 05:07:17.465817] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:00.980 [2024-11-21 05:07:17.465825] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.980 [2024-11-21 05:07:17.465833] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.980 [2024-11-21 05:07:17.465841] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.980 [2024-11-21 05:07:17.465849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.980 [2024-11-21 05:07:17.465862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.980 [2024-11-21 05:07:17.465869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.980 [2024-11-21 05:07:17.465876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.980 [2024-11-21 05:07:17.465884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.980 [2024-11-21 05:07:17.465892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.980 [2024-11-21 05:07:17.465901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:21:00.980 [2024-11-21 05:07:17.465911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.980 [2024-11-21 05:07:17.469073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.980 [2024-11-21 05:07:17.469279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.980 [2024-11-21 05:07:17.469299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:21:00.980 [2024-11-21 05:07:17.469320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.980 [2024-11-21 05:07:17.469488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.981 [2024-11-21 05:07:17.469501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.981 [2024-11-21 05:07:17.469511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:00.981 [2024-11-21 05:07:17.469518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.480554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.480755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.981 [2024-11-21 05:07:17.480775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.480791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.480886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.480899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.981 [2024-11-21 05:07:17.480910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.480919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.480972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.480983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.981 [2024-11-21 05:07:17.480992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.481001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.481028] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.481043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.981 [2024-11-21 05:07:17.481052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.481061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.499945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.500152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.981 [2024-11-21 05:07:17.500182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.500192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.514880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.981 [2024-11-21 05:07:17.515109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.515183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.981 [2024-11-21 05:07:17.515206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.515249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.981 [2024-11-21 05:07:17.515278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.515373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.981 [2024-11-21 05:07:17.515402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.515450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.981 [2024-11-21 05:07:17.515475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.515537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.981 [2024-11-21 05:07:17.515558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:00.981 [2024-11-21 05:07:17.515649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.981 [2024-11-21 05:07:17.515665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.981 [2024-11-21 05:07:17.515676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.981 [2024-11-21 05:07:17.515686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.981 [2024-11-21 05:07:17.515866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.724 ms, result 0 00:21:01.242 00:21:01.242 00:21:01.242 05:07:17 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=88331 00:21:01.242 05:07:17 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 88331 00:21:01.242 05:07:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88331 ']' 00:21:01.242 05:07:17 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:01.242 05:07:17 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.242 05:07:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.242 05:07:17 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.242 05:07:17 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.242 05:07:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:01.242 [2024-11-21 05:07:17.896438] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:21:01.242 [2024-11-21 05:07:17.896855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88331 ] 00:21:01.504 [2024-11-21 05:07:18.063773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.504 [2024-11-21 05:07:18.106111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.077 05:07:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.077 05:07:18 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:02.077 05:07:18 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:02.339 [2024-11-21 05:07:18.962685] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.339 [2024-11-21 05:07:18.962775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.602 [2024-11-21 05:07:19.119936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.602 [2024-11-21 05:07:19.120004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.602 [2024-11-21 05:07:19.120026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:02.602 [2024-11-21 05:07:19.120038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.602 [2024-11-21 05:07:19.122943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.602 [2024-11-21 05:07:19.123004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.602 [2024-11-21 05:07:19.123016] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:21:02.602 [2024-11-21 05:07:19.123026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.602 [2024-11-21 05:07:19.123171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:02.602 [2024-11-21 05:07:19.123481] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.603 [2024-11-21 05:07:19.123500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.123511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.603 [2024-11-21 05:07:19.123523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:21:02.603 [2024-11-21 05:07:19.123535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.126172] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:02.603 [2024-11-21 05:07:19.131124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.131180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:02.603 [2024-11-21 05:07:19.131195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.949 ms 00:21:02.603 [2024-11-21 05:07:19.131204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.131300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.131316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:02.603 [2024-11-21 05:07:19.131332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:02.603 [2024-11-21 05:07:19.131341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.143269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.143318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.603 [2024-11-21 05:07:19.143333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:21:02.603 [2024-11-21 05:07:19.143343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.143482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.143495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.603 [2024-11-21 05:07:19.143507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:02.603 [2024-11-21 05:07:19.143520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.143549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.143558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.603 [2024-11-21 05:07:19.143572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:02.603 [2024-11-21 05:07:19.143580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.143650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:02.603 [2024-11-21 05:07:19.146412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.146462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.603 [2024-11-21 05:07:19.146473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:21:02.603 [2024-11-21 05:07:19.146487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.146539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.146552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.603 [2024-11-21 05:07:19.146561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:02.603 [2024-11-21 05:07:19.146578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.146599] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:02.603 [2024-11-21 05:07:19.146642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:02.603 [2024-11-21 05:07:19.146690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:02.603 [2024-11-21 05:07:19.146717] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:02.603 [2024-11-21 05:07:19.146830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.603 [2024-11-21 05:07:19.146849] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.603 [2024-11-21 05:07:19.146861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:02.603 [2024-11-21 05:07:19.146875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.603 [2024-11-21 05:07:19.146884] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.603 [2024-11-21 05:07:19.146900] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:02.603 [2024-11-21 05:07:19.146908] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.603 [2024-11-21 05:07:19.146918] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.603 [2024-11-21 05:07:19.146929] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.603 [2024-11-21 05:07:19.146942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.146951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.603 [2024-11-21 05:07:19.146962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:21:02.603 [2024-11-21 05:07:19.146970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 [2024-11-21 05:07:19.147060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.603 [2024-11-21 05:07:19.147070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.603 [2024-11-21 05:07:19.147082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:02.603 [2024-11-21 05:07:19.147090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.603 
[2024-11-21 05:07:19.147200] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.603 [2024-11-21 05:07:19.147213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.603 [2024-11-21 05:07:19.147226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.603 [2024-11-21 05:07:19.147257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.603 [2024-11-21 05:07:19.147298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.603 [2024-11-21 05:07:19.147317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.603 [2024-11-21 05:07:19.147327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:02.603 [2024-11-21 05:07:19.147337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.603 [2024-11-21 05:07:19.147345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.603 [2024-11-21 05:07:19.147356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:02.603 [2024-11-21 05:07:19.147363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:02.603 [2024-11-21 05:07:19.147386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.603 [2024-11-21 05:07:19.147418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.603 [2024-11-21 05:07:19.147445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.603 [2024-11-21 05:07:19.147473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.603 [2024-11-21 05:07:19.147497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:21:02.603 [2024-11-21 05:07:19.147524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.603 [2024-11-21 05:07:19.147539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.603 [2024-11-21 05:07:19.147546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:02.603 [2024-11-21 05:07:19.147557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.603 [2024-11-21 05:07:19.147565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.603 [2024-11-21 05:07:19.147573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:02.603 [2024-11-21 05:07:19.147580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:02.603 [2024-11-21 05:07:19.147596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:02.603 [2024-11-21 05:07:19.147622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147629] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.603 [2024-11-21 05:07:19.147639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.603 [2024-11-21 05:07:19.147648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.603 [2024-11-21 05:07:19.147658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.603 [2024-11-21 05:07:19.147665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.603 [2024-11-21 05:07:19.147675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.603 [2024-11-21 05:07:19.147682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.604 [2024-11-21 05:07:19.147694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.604 [2024-11-21 05:07:19.147701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.604 [2024-11-21 05:07:19.147713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.604 [2024-11-21 05:07:19.147722] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.604 [2024-11-21 05:07:19.147735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:02.604 [2024-11-21 05:07:19.147754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:02.604 [2024-11-21 05:07:19.147761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:02.604 [2024-11-21 05:07:19.147773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:02.604 [2024-11-21 05:07:19.147781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:21:02.604 [2024-11-21 05:07:19.147789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:02.604 [2024-11-21 05:07:19.147797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:02.604 [2024-11-21 05:07:19.147807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:02.604 [2024-11-21 05:07:19.147815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:02.604 [2024-11-21 05:07:19.147826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:02.604 [2024-11-21 05:07:19.147870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.604 [2024-11-21 05:07:19.147882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.604 [2024-11-21 05:07:19.147905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.604 [2024-11-21 05:07:19.147913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.604 [2024-11-21 05:07:19.147922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.604 [2024-11-21 05:07:19.147930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.147942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.604 [2024-11-21 05:07:19.147951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:21:02.604 [2024-11-21 05:07:19.147961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.169350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.169578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.604 [2024-11-21 05:07:19.169877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.303 ms 00:21:02.604 [2024-11-21 05:07:19.169932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 
[2024-11-21 05:07:19.170118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.170304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.604 [2024-11-21 05:07:19.170331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:02.604 [2024-11-21 05:07:19.170353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.188137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.188350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.604 [2024-11-21 05:07:19.188557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.745 ms 00:21:02.604 [2024-11-21 05:07:19.188631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.188804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.188839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.604 [2024-11-21 05:07:19.188863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:02.604 [2024-11-21 05:07:19.188965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.189780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.189999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.604 [2024-11-21 05:07:19.190080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:21:02.604 [2024-11-21 05:07:19.190120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.190329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.190366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.604 [2024-11-21 05:07:19.190394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:21:02.604 [2024-11-21 05:07:19.190417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.202470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.202676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.604 [2024-11-21 05:07:19.202877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.016 ms 00:21:02.604 [2024-11-21 05:07:19.202932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.208098] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:02.604 [2024-11-21 05:07:19.208296] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:02.604 [2024-11-21 05:07:19.208368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.208395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:02.604 [2024-11-21 05:07:19.208416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.285 ms 00:21:02.604 [2024-11-21 05:07:19.208437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.225346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.225534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:02.604 [2024-11-21 05:07:19.225605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:21:02.604 [2024-11-21 05:07:19.225657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.229158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.229336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:02.604 [2024-11-21 05:07:19.229396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.355 ms 00:21:02.604 [2024-11-21 05:07:19.229422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.232484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.232685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:02.604 [2024-11-21 05:07:19.232752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.999 ms 00:21:02.604 [2024-11-21 05:07:19.232780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.233205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.233297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.604 [2024-11-21 05:07:19.233425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:21:02.604 [2024-11-21 05:07:19.233454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.271975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.272049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:02.604 [2024-11-21 05:07:19.272066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.473 ms 00:21:02.604 [2024-11-21 05:07:19.272082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.280650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:02.604 [2024-11-21 05:07:19.305969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.306025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.604 [2024-11-21 05:07:19.306043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.770 ms 00:21:02.604 [2024-11-21 05:07:19.306054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.306161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.306173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:02.604 [2024-11-21 05:07:19.306191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:02.604 [2024-11-21 05:07:19.306200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.306278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.306292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.604 [2024-11-21 05:07:19.306303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.049 ms 00:21:02.604 [2024-11-21 05:07:19.306312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.306344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.306355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.604 [2024-11-21 05:07:19.306370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:02.604 [2024-11-21 05:07:19.306381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.604 [2024-11-21 05:07:19.306430] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:02.604 [2024-11-21 05:07:19.306443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.604 [2024-11-21 05:07:19.306455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:02.605 [2024-11-21 05:07:19.306465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:02.605 [2024-11-21 05:07:19.306474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.605 [2024-11-21 05:07:19.314263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.605 [2024-11-21 05:07:19.314542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.605 [2024-11-21 05:07:19.314564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.763 ms 00:21:02.605 [2024-11-21 05:07:19.314582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.605 [2024-11-21 05:07:19.315080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.605 [2024-11-21 05:07:19.315144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.605 [2024-11-21 05:07:19.315159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:02.605 [2024-11-21 05:07:19.315172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.605 [2024-11-21 05:07:19.316500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.605 [2024-11-21 05:07:19.318231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 196.193 ms, result 0 00:21:02.605 [2024-11-21 05:07:19.320950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.605 Some configs were skipped because the RPC state that can call them passed over. 
00:21:02.866 05:07:19 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:02.866 [2024-11-21 05:07:19.558242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.866 [2024-11-21 05:07:19.558311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:02.866 [2024-11-21 05:07:19.558333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.312 ms 00:21:02.866 [2024-11-21 05:07:19.558343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.866 [2024-11-21 05:07:19.558384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.468 ms, result 0 00:21:02.866 true 00:21:02.866 05:07:19 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:03.127 [2024-11-21 05:07:19.777733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.127 [2024-11-21 05:07:19.777936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:03.127 [2024-11-21 05:07:19.777959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.543 ms 00:21:03.127 [2024-11-21 05:07:19.777972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.127 [2024-11-21 05:07:19.778018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.828 ms, result 0 00:21:03.127 true 00:21:03.127 05:07:19 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 88331 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88331 ']' 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88331 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88331 00:21:03.127 killing process with pid 88331 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88331' 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88331 00:21:03.127 05:07:19 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88331 00:21:03.390 [2024-11-21 05:07:20.024094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.390 [2024-11-21 05:07:20.024168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:03.390 [2024-11-21 05:07:20.024187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:03.390 [2024-11-21 05:07:20.024196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.390 [2024-11-21 05:07:20.024228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:03.390 [2024-11-21 05:07:20.025086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.390 [2024-11-21 05:07:20.025120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:03.390 [2024-11-21 05:07:20.025181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.834 ms 00:21:03.390 [2024-11-21 05:07:20.025198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.025604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.025643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:03.391 [2024-11-21 05:07:20.025653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:21:03.391 [2024-11-21 05:07:20.025670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.030640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.030821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:03.391 [2024-11-21 05:07:20.030842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.947 ms 00:21:03.391 [2024-11-21 05:07:20.030858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.038176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.038336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:03.391 [2024-11-21 05:07:20.038354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.184 ms 00:21:03.391 [2024-11-21 05:07:20.038368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.041316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.041394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:03.391 [2024-11-21 05:07:20.041409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:21:03.391 [2024-11-21 05:07:20.041423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.046442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.046595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:03.391 [2024-11-21 05:07:20.046673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.958 ms 00:21:03.391 [2024-11-21 05:07:20.046704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.047208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.047386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:03.391 [2024-11-21 05:07:20.047406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:03.391 [2024-11-21 05:07:20.047418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.050560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.050643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:03.391 [2024-11-21 05:07:20.050655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:21:03.391 [2024-11-21 05:07:20.050685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.053377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.053572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:03.391 [2024-11-21 
05:07:20.053595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:21:03.391 [2024-11-21 05:07:20.053626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.055958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.056013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:03.391 [2024-11-21 05:07:20.056024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:21:03.391 [2024-11-21 05:07:20.056034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.058005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.391 [2024-11-21 05:07:20.058059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:03.391 [2024-11-21 05:07:20.058068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:21:03.391 [2024-11-21 05:07:20.058078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.391 [2024-11-21 05:07:20.058124] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:03.391 [2024-11-21 05:07:20.058145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 
05:07:20.058540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:03.391 [2024-11-21 05:07:20.058687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:21:03.392 [2024-11-21 05:07:20.058790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.058992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:03.392 [2024-11-21 05:07:20.059149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:03.392 [2024-11-21 05:07:20.059158] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:21:03.392 [2024-11-21 05:07:20.059169] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:03.392 [2024-11-21 05:07:20.059181] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:03.392 [2024-11-21 05:07:20.059191] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:03.392 [2024-11-21 05:07:20.059200] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:03.392 [2024-11-21 05:07:20.059210] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:03.392 [2024-11-21 05:07:20.059223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:03.392 [2024-11-21 05:07:20.059232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:03.392 [2024-11-21 05:07:20.059239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:03.392 [2024-11-21 05:07:20.059249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:03.392 [2024-11-21 05:07:20.059258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.392 [2024-11-21 05:07:20.059268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:03.392 [2024-11-21 05:07:20.059280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:21:03.392 [2024-11-21 05:07:20.059293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.062392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.392 [2024-11-21 05:07:20.062428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:03.392 [2024-11-21 05:07:20.062445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:21:03.392 [2024-11-21 05:07:20.062457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.062657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:03.392 [2024-11-21 05:07:20.062675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:03.392 [2024-11-21 05:07:20.062686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:21:03.392 [2024-11-21 05:07:20.062696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.072993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.073050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.392 [2024-11-21 05:07:20.073062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.073074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.073220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.073242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.392 [2024-11-21 05:07:20.073256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.073277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.073357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.073377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.392 [2024-11-21 05:07:20.073392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.073409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.073440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.073458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.392 [2024-11-21 05:07:20.073472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.073485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.093061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.093130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.392 [2024-11-21 05:07:20.093156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.093172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.107851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.107923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.392 [2024-11-21 05:07:20.107937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.107956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.108038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.108054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.392 [2024-11-21 05:07:20.108064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.108080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:03.392 [2024-11-21 05:07:20.108118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.108131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.392 [2024-11-21 05:07:20.108140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.392 [2024-11-21 05:07:20.108150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.392 [2024-11-21 05:07:20.108240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.392 [2024-11-21 05:07:20.108253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.393 [2024-11-21 05:07:20.108267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.393 [2024-11-21 05:07:20.108278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.393 [2024-11-21 05:07:20.108315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.393 [2024-11-21 05:07:20.108328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:03.393 [2024-11-21 05:07:20.108339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.393 [2024-11-21 05:07:20.108352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.393 [2024-11-21 05:07:20.108409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.393 [2024-11-21 05:07:20.108423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.393 [2024-11-21 05:07:20.108438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.393 [2024-11-21 05:07:20.108448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.393 [2024-11-21 05:07:20.108514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.393 [2024-11-21 05:07:20.108531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.393 [2024-11-21 05:07:20.108541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.393 [2024-11-21 05:07:20.108554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.393 [2024-11-21 05:07:20.108786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.641 ms, result 0 00:21:03.965 05:07:20 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:03.965 [2024-11-21 05:07:20.491380] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
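The trim.sh@105 step above reads the device back with spdk_dd: --ib=ftl0 takes the FTL bdev as input, --of writes the bytes out to test/ftl/data, and --count=65536 requests 65536 logical blocks. At FTL's 4 KiB logical block size (the layout dump below shows the 90.00 MiB l2p region spanning 0x5a00 = 23040 blocks, i.e. 4 KiB each), that works out to the 256 MB the copy progress further down reports. A quick illustrative check in shell arithmetic, reusing figures from the layout dump:

    blocks=65536 block_size=4096                        # 4 KiB FTL blocks
    echo "$(( blocks * block_size / 1024 / 1024 )) MiB" # copy size: 256 MiB
    echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"        # L2P entries x address size: 90 MiB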
00:21:03.965 [2024-11-21 05:07:20.491553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88373 ] 00:21:03.965 [2024-11-21 05:07:20.655602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.226 [2024-11-21 05:07:20.698447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.226 [2024-11-21 05:07:20.853010] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.226 [2024-11-21 05:07:20.853104] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:04.489 [2024-11-21 05:07:21.017076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.489 [2024-11-21 05:07:21.017403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:04.489 [2024-11-21 05:07:21.017443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:04.489 [2024-11-21 05:07:21.017469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.489 [2024-11-21 05:07:21.020274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.489 [2024-11-21 05:07:21.020331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.489 [2024-11-21 05:07:21.020344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:21:04.489 [2024-11-21 05:07:21.020352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.489 [2024-11-21 05:07:21.020485] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:04.489 [2024-11-21 05:07:21.021013] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:04.489 [2024-11-21 05:07:21.021096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.489 [2024-11-21 05:07:21.021122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.489 [2024-11-21 05:07:21.021277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:21:04.489 [2024-11-21 05:07:21.021318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.489 [2024-11-21 05:07:21.023983] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:04.489 [2024-11-21 05:07:21.029037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.489 [2024-11-21 05:07:21.029103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:04.489 [2024-11-21 05:07:21.029120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:21:04.489 [2024-11-21 05:07:21.029130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.489 [2024-11-21 05:07:21.029265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.489 [2024-11-21 05:07:21.029289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:04.489 [2024-11-21 05:07:21.029304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:04.489 [2024-11-21 05:07:21.029318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.489 [2024-11-21 05:07:21.041321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:04.489 [2024-11-21 05:07:21.041378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.489 [2024-11-21 05:07:21.041401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.936 ms 00:21:04.489 [2024-11-21 05:07:21.041415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.489 [2024-11-21 05:07:21.041602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.489 [2024-11-21 05:07:21.041655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.489 [2024-11-21 05:07:21.041666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:04.490 [2024-11-21 05:07:21.041677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.490 [2024-11-21 05:07:21.041713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.490 [2024-11-21 05:07:21.041724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:04.490 [2024-11-21 05:07:21.041733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:04.490 [2024-11-21 05:07:21.041741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.490 [2024-11-21 05:07:21.041767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:04.490 [2024-11-21 05:07:21.044497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.490 [2024-11-21 05:07:21.044538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.490 [2024-11-21 05:07:21.044549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:21:04.490 [2024-11-21 05:07:21.044559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.490 [2024-11-21 05:07:21.044640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.490 [2024-11-21 05:07:21.044651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:04.490 [2024-11-21 05:07:21.044661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:04.490 [2024-11-21 05:07:21.044669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.490 [2024-11-21 05:07:21.044701] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:04.490 [2024-11-21 05:07:21.044729] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:04.490 [2024-11-21 05:07:21.044770] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:04.490 [2024-11-21 05:07:21.044794] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:04.490 [2024-11-21 05:07:21.044912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:04.490 [2024-11-21 05:07:21.044926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:04.490 [2024-11-21 05:07:21.044938] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:04.490 [2024-11-21 05:07:21.044955] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:04.490 [2024-11-21 05:07:21.044964] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:04.490 [2024-11-21 05:07:21.044974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:04.490 [2024-11-21 05:07:21.044982] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:04.490 [2024-11-21 05:07:21.044991] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:04.490 [2024-11-21 05:07:21.045003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:04.490 [2024-11-21 05:07:21.045014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.490 [2024-11-21 05:07:21.045023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:04.490 [2024-11-21 05:07:21.045033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:21:04.490 [2024-11-21 05:07:21.045041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.490 [2024-11-21 05:07:21.045130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.490 [2024-11-21 05:07:21.045173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:04.490 [2024-11-21 05:07:21.045189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:04.490 [2024-11-21 05:07:21.045203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.490 [2024-11-21 05:07:21.045344] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:04.490 [2024-11-21 05:07:21.045364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:04.490 [2024-11-21 05:07:21.045389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:04.490 [2024-11-21 05:07:21.045433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:04.490 [2024-11-21 05:07:21.045464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.490 [2024-11-21 05:07:21.045484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:04.490 [2024-11-21 05:07:21.045492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:04.490 [2024-11-21 05:07:21.045501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:04.490 [2024-11-21 05:07:21.045510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:04.490 [2024-11-21 05:07:21.045518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:04.490 [2024-11-21 05:07:21.045527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:04.490 [2024-11-21 05:07:21.045542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045550] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:04.490 [2024-11-21 05:07:21.045570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:04.490 [2024-11-21 05:07:21.045604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:04.490 [2024-11-21 05:07:21.045648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:04.490 [2024-11-21 05:07:21.045670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:04.490 [2024-11-21 05:07:21.045692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.490 [2024-11-21 05:07:21.045708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:04.490 [2024-11-21 05:07:21.045716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:04.490 [2024-11-21 05:07:21.045723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:04.490 [2024-11-21 05:07:21.045730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:04.490 [2024-11-21 05:07:21.045737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:04.490 [2024-11-21 05:07:21.045747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:04.490 [2024-11-21 05:07:21.045763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:04.490 [2024-11-21 05:07:21.045772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045779] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:04.490 [2024-11-21 05:07:21.045788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:04.490 [2024-11-21 05:07:21.045796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:04.490 [2024-11-21 05:07:21.045815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:04.490 [2024-11-21 05:07:21.045823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:04.490 [2024-11-21 05:07:21.045830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:04.490 
[2024-11-21 05:07:21.045837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:04.490 [2024-11-21 05:07:21.045843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:04.490 [2024-11-21 05:07:21.045855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:04.490 [2024-11-21 05:07:21.045866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:04.490 [2024-11-21 05:07:21.045876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.490 [2024-11-21 05:07:21.045891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:04.490 [2024-11-21 05:07:21.045900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:04.490 [2024-11-21 05:07:21.045908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:04.490 [2024-11-21 05:07:21.045915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:04.490 [2024-11-21 05:07:21.045922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:04.490 [2024-11-21 05:07:21.045929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:04.490 [2024-11-21 05:07:21.045936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:04.490 [2024-11-21 05:07:21.045944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:04.490 [2024-11-21 05:07:21.045953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:04.490 [2024-11-21 05:07:21.045960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:04.491 [2024-11-21 05:07:21.045967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:04.491 [2024-11-21 05:07:21.045974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:04.491 [2024-11-21 05:07:21.045981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:04.491 [2024-11-21 05:07:21.045989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:04.491 [2024-11-21 05:07:21.045997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:04.491 [2024-11-21 05:07:21.046009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:04.491 [2024-11-21 05:07:21.046025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:04.491 [2024-11-21 05:07:21.046034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:04.491 [2024-11-21 05:07:21.046041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:04.491 [2024-11-21 05:07:21.046048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:04.491 [2024-11-21 05:07:21.046056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.046066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:04.491 [2024-11-21 05:07:21.046077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:21:04.491 [2024-11-21 05:07:21.046085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.067360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.067584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.491 [2024-11-21 05:07:21.067877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.192 ms 00:21:04.491 [2024-11-21 05:07:21.067926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.068096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.068148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:04.491 [2024-11-21 05:07:21.068172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:04.491 [2024-11-21 05:07:21.068275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.092358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.092629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.491 [2024-11-21 05:07:21.092917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.036 ms 00:21:04.491 [2024-11-21 05:07:21.092991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.093160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.093234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.491 [2024-11-21 05:07:21.093287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:04.491 [2024-11-21 05:07:21.093332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.094180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.094373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.491 [2024-11-21 05:07:21.094451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:21:04.491 [2024-11-21 05:07:21.094488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.094752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.094791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.491 [2024-11-21 05:07:21.094827] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:21:04.491 [2024-11-21 05:07:21.094855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.107394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.107571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.491 [2024-11-21 05:07:21.107690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.490 ms 00:21:04.491 [2024-11-21 05:07:21.107717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.112820] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:04.491 [2024-11-21 05:07:21.113019] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:04.491 [2024-11-21 05:07:21.113100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.113122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:04.491 [2024-11-21 05:07:21.113179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.229 ms 00:21:04.491 [2024-11-21 05:07:21.113210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.130346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.130531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:04.491 [2024-11-21 05:07:21.130596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.020 ms 00:21:04.491 [2024-11-21 05:07:21.130644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.134232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.134411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:04.491 [2024-11-21 05:07:21.134471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:21:04.491 [2024-11-21 05:07:21.134494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.137824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.138005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:04.491 [2024-11-21 05:07:21.138064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:21:04.491 [2024-11-21 05:07:21.138087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.138604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.138790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:04.491 [2024-11-21 05:07:21.138868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:21:04.491 [2024-11-21 05:07:21.138892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.169056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.169301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:04.491 [2024-11-21 05:07:21.169325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
30.116 ms 00:21:04.491 [2024-11-21 05:07:21.169335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.178197] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:04.491 [2024-11-21 05:07:21.203102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.203162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:04.491 [2024-11-21 05:07:21.203178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.644 ms 00:21:04.491 [2024-11-21 05:07:21.203199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.203321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.203336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:04.491 [2024-11-21 05:07:21.203348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:04.491 [2024-11-21 05:07:21.203360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.203435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.203447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:04.491 [2024-11-21 05:07:21.203459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:04.491 [2024-11-21 05:07:21.203469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.203507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.203518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:04.491 [2024-11-21 05:07:21.203528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:04.491 [2024-11-21 05:07:21.203537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.203582] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:04.491 [2024-11-21 05:07:21.203594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.203603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:04.491 [2024-11-21 05:07:21.203653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:04.491 [2024-11-21 05:07:21.203663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.211340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.211567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:04.491 [2024-11-21 05:07:21.211588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.649 ms 00:21:04.491 [2024-11-21 05:07:21.211597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 [2024-11-21 05:07:21.211732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.491 [2024-11-21 05:07:21.211748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:04.491 [2024-11-21 05:07:21.211758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:04.491 [2024-11-21 05:07:21.211767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.491 
[2024-11-21 05:07:21.213088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.491 [2024-11-21 05:07:21.214837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 195.571 ms, result 0 00:21:04.491 [2024-11-21 05:07:21.216291] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.754 [2024-11-21 05:07:21.223528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.700  [2024-11-21T05:07:23.463Z] Copying: 14/256 [MB] (14 MBps) [2024-11-21T05:07:24.407Z] Copying: 34/256 [MB] (20 MBps) [2024-11-21T05:07:25.351Z] Copying: 53/256 [MB] (19 MBps) [2024-11-21T05:07:26.294Z] Copying: 71/256 [MB] (18 MBps) [2024-11-21T05:07:27.682Z] Copying: 93/256 [MB] (21 MBps) [2024-11-21T05:07:28.627Z] Copying: 106/256 [MB] (12 MBps) [2024-11-21T05:07:29.571Z] Copying: 121/256 [MB] (15 MBps) [2024-11-21T05:07:30.514Z] Copying: 144/256 [MB] (22 MBps) [2024-11-21T05:07:31.458Z] Copying: 164/256 [MB] (20 MBps) [2024-11-21T05:07:32.399Z] Copying: 187/256 [MB] (22 MBps) [2024-11-21T05:07:33.341Z] Copying: 208/256 [MB] (21 MBps) [2024-11-21T05:07:34.282Z] Copying: 231/256 [MB] (23 MBps) [2024-11-21T05:07:34.542Z] Copying: 254/256 [MB] (22 MBps) [2024-11-21T05:07:34.805Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-21 05:07:34.672294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:18.071 [2024-11-21 05:07:34.675380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.675754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:18.071 [2024-11-21 05:07:34.675818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:18.071 [2024-11-21 05:07:34.675838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.675901] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:18.071 [2024-11-21 05:07:34.677041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.677103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:18.071 [2024-11-21 05:07:34.677125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:21:18.071 [2024-11-21 05:07:34.677195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.677835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.677866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:18.071 [2024-11-21 05:07:34.677880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:21:18.071 [2024-11-21 05:07:34.677895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.683659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.683701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:18.071 [2024-11-21 05:07:34.683719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.743 ms 00:21:18.071 [2024-11-21 05:07:34.683730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 
[2024-11-21 05:07:34.692498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.692547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:18.071 [2024-11-21 05:07:34.692571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.705 ms 00:21:18.071 [2024-11-21 05:07:34.692591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.695553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.695775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:18.071 [2024-11-21 05:07:34.695797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:21:18.071 [2024-11-21 05:07:34.695809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.701194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.701376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:18.071 [2024-11-21 05:07:34.701445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.336 ms 00:21:18.071 [2024-11-21 05:07:34.701469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.701647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.701727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:18.071 [2024-11-21 05:07:34.701770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:18.071 [2024-11-21 05:07:34.701796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.071 [2024-11-21 05:07:34.705494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.071 [2024-11-21 05:07:34.705731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:18.071 [2024-11-21 05:07:34.705848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:21:18.072 [2024-11-21 05:07:34.705877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.072 [2024-11-21 05:07:34.709089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.072 [2024-11-21 05:07:34.709306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:18.072 [2024-11-21 05:07:34.709372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:21:18.072 [2024-11-21 05:07:34.709399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.072 [2024-11-21 05:07:34.711718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.072 [2024-11-21 05:07:34.711874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:18.072 [2024-11-21 05:07:34.711929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:21:18.072 [2024-11-21 05:07:34.711952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.072 [2024-11-21 05:07:34.714048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.072 [2024-11-21 05:07:34.714201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:18.072 [2024-11-21 05:07:34.714255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:21:18.072 [2024-11-21 05:07:34.714277] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.072 [2024-11-21 05:07:34.714424] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:18.072 [2024-11-21 05:07:34.714545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.714988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.715964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716961] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.716998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 
05:07:34.717192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:18.072 [2024-11-21 05:07:34.717230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:21:18.073 [2024-11-21 05:07:34.717425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:18.073 [2024-11-21 05:07:34.717451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:18.073 [2024-11-21 05:07:34.717462] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8ac88a35-15dc-41bb-b029-a6466d3507cf 00:21:18.073 [2024-11-21 05:07:34.717471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:18.073 [2024-11-21 05:07:34.717479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:18.073 [2024-11-21 05:07:34.717493] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:18.073 [2024-11-21 05:07:34.717503] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:18.073 [2024-11-21 05:07:34.717512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:18.073 [2024-11-21 05:07:34.717522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:18.073 [2024-11-21 05:07:34.717529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:18.073 [2024-11-21 05:07:34.717536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:18.073 [2024-11-21 05:07:34.717543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:18.073 [2024-11-21 05:07:34.717553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.073 [2024-11-21 05:07:34.717566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:18.073 [2024-11-21 05:07:34.717576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:21:18.073 [2024-11-21 05:07:34.717585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.720967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.073 [2024-11-21 05:07:34.721127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:18.073 [2024-11-21 05:07:34.721198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.185 ms 00:21:18.073 [2024-11-21 05:07:34.721221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.721439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.073 [2024-11-21 05:07:34.721477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:18.073 [2024-11-21 05:07:34.721511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:18.073 [2024-11-21 05:07:34.721534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.732571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.732762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.073 [2024-11-21 05:07:34.732818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.732843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.732955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.732980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:21:18.073 [2024-11-21 05:07:34.733053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.733077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.733163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.733175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.073 [2024-11-21 05:07:34.733185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.733193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.733217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.733227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.073 [2024-11-21 05:07:34.733236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.733245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.752449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.752666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.073 [2024-11-21 05:07:34.752686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.752696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.767738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.767932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.073 [2024-11-21 05:07:34.767951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.767961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.768046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.073 [2024-11-21 05:07:34.768058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.768067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.768126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.073 [2024-11-21 05:07:34.768136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.768149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.768254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.073 [2024-11-21 05:07:34.768264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.768273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:18.073 [2024-11-21 05:07:34.768320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:18.073 [2024-11-21 05:07:34.768334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.768343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.768411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.073 [2024-11-21 05:07:34.768420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.768429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:18.073 [2024-11-21 05:07:34.768507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.073 [2024-11-21 05:07:34.768522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:18.073 [2024-11-21 05:07:34.768532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.073 [2024-11-21 05:07:34.768757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 93.354 ms, result 0 00:21:18.334 00:21:18.334 00:21:18.334 05:07:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:19.278 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:19.278 Process with pid 88331 is not found 00:21:19.278 05:07:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 88331 00:21:19.278 05:07:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88331 ']' 00:21:19.278 05:07:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88331 00:21:19.278 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88331) - No such process 00:21:19.278 05:07:35 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 88331 is not found' 00:21:19.278 ************************************ 00:21:19.278 END TEST ftl_trim 00:21:19.278 ************************************ 00:21:19.278 00:21:19.278 real 1m10.327s 00:21:19.278 user 1m33.051s 00:21:19.278 sys 0m5.632s 00:21:19.278 05:07:35 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.278 05:07:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:19.278 05:07:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:19.278 05:07:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:19.278 05:07:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.278 05:07:35 ftl -- common/autotest_common.sh@10 -- # set +x 
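
The md5sum -c line above is the payoff of the trim test's tail: the data file written through ftl0 is checked against a checksum recorded before the device was torn down, and '/home/vagrant/spdk_repo/spdk/test/ftl/data: OK' confirms nothing was lost across the shutdown. The cleanup that follows disarms the error trap, removes the scratch files, and calls killprocess, which tolerates a target that already exited; the failed kill -0 probe is what produces 'Process with pid 88331 is not found'. A minimal sketch of that verify-then-clean pattern, assuming the file names from this run:

# Verify-then-clean tail of the trim test, reassembled as a sketch.
testdir=/home/vagrant/spdk_repo/spdk/test/ftl
svcpid=999999   # stand-in pid; 88331 in this run

md5sum -c "$testdir/testfile.md5"            # must print ".../data: OK"

trap - SIGINT SIGTERM EXIT                   # success: disarm cleanup trap
rm -f "$testdir/testfile.md5" "$testdir/config/ftl.json" \
      "$testdir/random_pattern" "$testdir/data"

if kill -0 "$svcpid" 2> /dev/null; then
    kill "$svcpid"                           # target still running
else
    echo "Process with pid $svcpid is not found"
fi
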
00:21:19.278 ************************************ 00:21:19.278 START TEST ftl_restore 00:21:19.278 ************************************ 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:19.278 * Looking for test storage... 00:21:19.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.278 05:07:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:19.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.278 --rc genhtml_branch_coverage=1 00:21:19.278 --rc genhtml_function_coverage=1 00:21:19.278 --rc genhtml_legend=1 00:21:19.278 --rc geninfo_all_blocks=1 00:21:19.278 --rc geninfo_unexecuted_blocks=1 00:21:19.278 00:21:19.278 ' 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:19.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.278 --rc genhtml_branch_coverage=1 00:21:19.278 --rc genhtml_function_coverage=1 00:21:19.278 --rc genhtml_legend=1 00:21:19.278 --rc geninfo_all_blocks=1 00:21:19.278 --rc geninfo_unexecuted_blocks=1 00:21:19.278 00:21:19.278 ' 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:19.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.278 --rc genhtml_branch_coverage=1 00:21:19.278 --rc genhtml_function_coverage=1 00:21:19.278 --rc genhtml_legend=1 00:21:19.278 --rc geninfo_all_blocks=1 00:21:19.278 --rc geninfo_unexecuted_blocks=1 00:21:19.278 00:21:19.278 ' 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:19.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.278 --rc genhtml_branch_coverage=1 00:21:19.278 --rc genhtml_function_coverage=1 00:21:19.278 --rc genhtml_legend=1 00:21:19.278 --rc geninfo_all_blocks=1 00:21:19.278 --rc geninfo_unexecuted_blocks=1 00:21:19.278 00:21:19.278 ' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
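
The scripts/common.sh xtrace above is autotest deciding which lcov flags it may use: lt 1.15 2 asks whether the installed lcov (1.15, pulled out with awk '{print $NF}') is older than version 2, and cmp_versions answers by splitting both version strings on '.', '-' and ':' and comparing them component by component, which is why the run ends with return 0 and the legacy --rc lcov_branch_coverage options being exported. Reassembled into readable form below; this is a reconstruction of the logic visible in the trace, not a verbatim copy of scripts/common.sh.

# Readable reconstruction of the traced version comparison.
decimal() {                     # numeric component -> itself, else 0
    [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
}

cmp_versions() {                # usage: cmp_versions 1.15 '<' 2
    local IFS='.-:' op=$2 v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a b
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        if ((a > b)); then [[ $op == '>' ]]; return; fi
        if ((a < b)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *'='* ]]          # all equal: true only for >=, <=, ==
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov < 2: use legacy lcov_branch_coverage options"
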
00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.BzwPmATtPB 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:19.278 
05:07:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88598 00:21:19.278 05:07:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88598 00:21:19.278 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88598 ']' 00:21:19.279 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.279 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.279 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.279 05:07:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:19.279 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.279 05:07:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:19.540 [2024-11-21 05:07:36.076387] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:21:19.540 [2024-11-21 05:07:36.076557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88598 ] 00:21:19.540 [2024-11-21 05:07:36.239767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.801 [2024-11-21 05:07:36.280654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.375 05:07:36 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.375 05:07:36 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:20.375 05:07:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:20.375 05:07:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:20.375 05:07:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:20.375 05:07:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:20.375 05:07:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:20.375 05:07:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:20.636 05:07:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:20.636 05:07:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:20.636 05:07:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:20.636 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:20.636 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:20.636 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:20.636 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:20.636 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:20.898 { 00:21:20.898 "name": "nvme0n1", 00:21:20.898 "aliases": [ 00:21:20.898 "4bc09b44-50d8-4fdd-9808-9f090a9a6c85" 00:21:20.898 ], 00:21:20.898 "product_name": "NVMe disk", 00:21:20.898 "block_size": 4096, 00:21:20.898 "num_blocks": 1310720, 00:21:20.898 "uuid": 
"4bc09b44-50d8-4fdd-9808-9f090a9a6c85", 00:21:20.898 "numa_id": -1, 00:21:20.898 "assigned_rate_limits": { 00:21:20.898 "rw_ios_per_sec": 0, 00:21:20.898 "rw_mbytes_per_sec": 0, 00:21:20.898 "r_mbytes_per_sec": 0, 00:21:20.898 "w_mbytes_per_sec": 0 00:21:20.898 }, 00:21:20.898 "claimed": true, 00:21:20.898 "claim_type": "read_many_write_one", 00:21:20.898 "zoned": false, 00:21:20.898 "supported_io_types": { 00:21:20.898 "read": true, 00:21:20.898 "write": true, 00:21:20.898 "unmap": true, 00:21:20.898 "flush": true, 00:21:20.898 "reset": true, 00:21:20.898 "nvme_admin": true, 00:21:20.898 "nvme_io": true, 00:21:20.898 "nvme_io_md": false, 00:21:20.898 "write_zeroes": true, 00:21:20.898 "zcopy": false, 00:21:20.898 "get_zone_info": false, 00:21:20.898 "zone_management": false, 00:21:20.898 "zone_append": false, 00:21:20.898 "compare": true, 00:21:20.898 "compare_and_write": false, 00:21:20.898 "abort": true, 00:21:20.898 "seek_hole": false, 00:21:20.898 "seek_data": false, 00:21:20.898 "copy": true, 00:21:20.898 "nvme_iov_md": false 00:21:20.898 }, 00:21:20.898 "driver_specific": { 00:21:20.898 "nvme": [ 00:21:20.898 { 00:21:20.898 "pci_address": "0000:00:11.0", 00:21:20.898 "trid": { 00:21:20.898 "trtype": "PCIe", 00:21:20.898 "traddr": "0000:00:11.0" 00:21:20.898 }, 00:21:20.898 "ctrlr_data": { 00:21:20.898 "cntlid": 0, 00:21:20.898 "vendor_id": "0x1b36", 00:21:20.898 "model_number": "QEMU NVMe Ctrl", 00:21:20.898 "serial_number": "12341", 00:21:20.898 "firmware_revision": "8.0.0", 00:21:20.898 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:20.898 "oacs": { 00:21:20.898 "security": 0, 00:21:20.898 "format": 1, 00:21:20.898 "firmware": 0, 00:21:20.898 "ns_manage": 1 00:21:20.898 }, 00:21:20.898 "multi_ctrlr": false, 00:21:20.898 "ana_reporting": false 00:21:20.898 }, 00:21:20.898 "vs": { 00:21:20.898 "nvme_version": "1.4" 00:21:20.898 }, 00:21:20.898 "ns_data": { 00:21:20.898 "id": 1, 00:21:20.898 "can_share": false 00:21:20.898 } 00:21:20.898 } 00:21:20.898 ], 00:21:20.898 "mp_policy": "active_passive" 00:21:20.898 } 00:21:20.898 } 00:21:20.898 ]' 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:20.898 05:07:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:20.898 05:07:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:20.898 05:07:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:20.898 05:07:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:20.898 05:07:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:20.898 05:07:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:21.159 05:07:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=c543dc97-6bb7-48cd-bf88-5b8e92137259 00:21:21.159 05:07:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:21.159 05:07:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c543dc97-6bb7-48cd-bf88-5b8e92137259 00:21:21.420 05:07:37 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:21.682 05:07:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fad6dd4a-ba6c-403d-a636-9972cdfd04f5 00:21:21.682 05:07:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fad6dd4a-ba6c-403d-a636-9972cdfd04f5 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=34f94f34-2291-4a7c-95e6-fa198527226f 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=34f94f34-2291-4a7c-95e6-fa198527226f 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:21.944 05:07:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:21.944 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=34f94f34-2291-4a7c-95e6-fa198527226f 00:21:21.944 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:21.944 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:21.944 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:21.944 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:21.944 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:21.944 { 00:21:21.944 "name": "34f94f34-2291-4a7c-95e6-fa198527226f", 00:21:21.944 "aliases": [ 00:21:21.944 "lvs/nvme0n1p0" 00:21:21.944 ], 00:21:21.944 "product_name": "Logical Volume", 00:21:21.944 "block_size": 4096, 00:21:21.944 "num_blocks": 26476544, 00:21:21.944 "uuid": "34f94f34-2291-4a7c-95e6-fa198527226f", 00:21:21.944 "assigned_rate_limits": { 00:21:21.944 "rw_ios_per_sec": 0, 00:21:21.944 "rw_mbytes_per_sec": 0, 00:21:21.944 "r_mbytes_per_sec": 0, 00:21:21.944 "w_mbytes_per_sec": 0 00:21:21.944 }, 00:21:21.944 "claimed": false, 00:21:21.944 "zoned": false, 00:21:21.944 "supported_io_types": { 00:21:21.944 "read": true, 00:21:21.944 "write": true, 00:21:21.944 "unmap": true, 00:21:21.944 "flush": false, 00:21:21.944 "reset": true, 00:21:21.944 "nvme_admin": false, 00:21:21.944 "nvme_io": false, 00:21:21.944 "nvme_io_md": false, 00:21:21.944 "write_zeroes": true, 00:21:21.944 "zcopy": false, 00:21:21.944 "get_zone_info": false, 00:21:21.944 "zone_management": false, 00:21:21.944 "zone_append": false, 00:21:21.944 "compare": false, 00:21:21.944 "compare_and_write": false, 00:21:21.944 "abort": false, 00:21:21.944 "seek_hole": true, 00:21:21.944 "seek_data": true, 00:21:21.944 "copy": false, 00:21:21.944 "nvme_iov_md": false 00:21:21.944 }, 00:21:21.944 "driver_specific": { 00:21:21.944 "lvol": { 00:21:21.944 "lvol_store_uuid": "fad6dd4a-ba6c-403d-a636-9972cdfd04f5", 00:21:21.944 "base_bdev": "nvme0n1", 00:21:21.944 "thin_provision": true, 00:21:21.944 "num_allocated_clusters": 0, 00:21:21.944 "snapshot": false, 00:21:21.944 "clone": false, 00:21:21.944 "esnap_clone": false 00:21:21.944 } 00:21:21.944 } 00:21:21.944 } 00:21:21.944 ]' 00:21:21.944 05:07:38 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:22.207 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:22.207 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:22.207 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:22.207 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:22.207 05:07:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:22.207 05:07:38 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:22.207 05:07:38 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:22.207 05:07:38 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:22.466 05:07:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:22.466 05:07:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:22.466 05:07:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:22.466 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=34f94f34-2291-4a7c-95e6-fa198527226f 00:21:22.466 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:22.466 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:22.466 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:22.466 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:22.725 { 00:21:22.725 "name": "34f94f34-2291-4a7c-95e6-fa198527226f", 00:21:22.725 "aliases": [ 00:21:22.725 "lvs/nvme0n1p0" 00:21:22.725 ], 00:21:22.725 "product_name": "Logical Volume", 00:21:22.725 "block_size": 4096, 00:21:22.725 "num_blocks": 26476544, 00:21:22.725 "uuid": "34f94f34-2291-4a7c-95e6-fa198527226f", 00:21:22.725 "assigned_rate_limits": { 00:21:22.725 "rw_ios_per_sec": 0, 00:21:22.725 "rw_mbytes_per_sec": 0, 00:21:22.725 "r_mbytes_per_sec": 0, 00:21:22.725 "w_mbytes_per_sec": 0 00:21:22.725 }, 00:21:22.725 "claimed": false, 00:21:22.725 "zoned": false, 00:21:22.725 "supported_io_types": { 00:21:22.725 "read": true, 00:21:22.725 "write": true, 00:21:22.725 "unmap": true, 00:21:22.725 "flush": false, 00:21:22.725 "reset": true, 00:21:22.725 "nvme_admin": false, 00:21:22.725 "nvme_io": false, 00:21:22.725 "nvme_io_md": false, 00:21:22.725 "write_zeroes": true, 00:21:22.725 "zcopy": false, 00:21:22.725 "get_zone_info": false, 00:21:22.725 "zone_management": false, 00:21:22.725 "zone_append": false, 00:21:22.725 "compare": false, 00:21:22.725 "compare_and_write": false, 00:21:22.725 "abort": false, 00:21:22.725 "seek_hole": true, 00:21:22.725 "seek_data": true, 00:21:22.725 "copy": false, 00:21:22.725 "nvme_iov_md": false 00:21:22.725 }, 00:21:22.725 "driver_specific": { 00:21:22.725 "lvol": { 00:21:22.725 "lvol_store_uuid": "fad6dd4a-ba6c-403d-a636-9972cdfd04f5", 00:21:22.725 "base_bdev": "nvme0n1", 00:21:22.725 "thin_provision": true, 00:21:22.725 "num_allocated_clusters": 0, 00:21:22.725 "snapshot": false, 00:21:22.725 "clone": false, 00:21:22.725 "esnap_clone": false 00:21:22.725 } 00:21:22.725 } 00:21:22.725 } 00:21:22.725 ]' 00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
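
By this point the restore test has completed the standard FTL bring-up recipe traced above: spdk_tgt is up (pid 88598, confirmed by waitforlisten), the base namespace at 0000:00:11.0 is attached as nvme0n1 and measured at 4096 B x 1310720 blocks = 5120 MiB, stale lvstores are deleted, a fresh one named lvs is created, and a 103424 MiB lvol is carved with -t; thin provisioning (note num_allocated_clusters: 0 in the dump) is what lets the lvol advertise roughly twenty times the capacity of the 5120 MiB device. The controller at 0000:00:10.0 is attached as nvc0 to back the non-volatile write cache. The same sequence as a condensed sketch, using the addresses from this run:

# Condensed bring-up from the trace; addresses and sizes from this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base dev
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache dev

# get_bdev_size in MiB: 1310720 blocks * 4096 B / 1048576 = 5120
$rpc bdev_get_bdevs -b nvme0n1 |
    jq '.[0].num_blocks * .[0].block_size / 1048576'

# Drop any leftover lvstore, then create one and a thin (-t) lvol on it.
$rpc bdev_lvol_get_lvstores | jq -r '.[].uuid' |
    while read -r u; do $rpc bdev_lvol_delete_lvstore -u "$u"; done
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"
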
00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:22.725 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:22.725 05:07:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:22.725 05:07:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:22.983 05:07:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:22.983 05:07:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:22.983 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=34f94f34-2291-4a7c-95e6-fa198527226f 00:21:22.983 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:22.983 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:22.983 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:22.983 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34f94f34-2291-4a7c-95e6-fa198527226f 00:21:23.241 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:23.241 { 00:21:23.241 "name": "34f94f34-2291-4a7c-95e6-fa198527226f", 00:21:23.241 "aliases": [ 00:21:23.241 "lvs/nvme0n1p0" 00:21:23.241 ], 00:21:23.241 "product_name": "Logical Volume", 00:21:23.241 "block_size": 4096, 00:21:23.241 "num_blocks": 26476544, 00:21:23.241 "uuid": "34f94f34-2291-4a7c-95e6-fa198527226f", 00:21:23.241 "assigned_rate_limits": { 00:21:23.241 "rw_ios_per_sec": 0, 00:21:23.241 "rw_mbytes_per_sec": 0, 00:21:23.241 "r_mbytes_per_sec": 0, 00:21:23.241 "w_mbytes_per_sec": 0 00:21:23.241 }, 00:21:23.241 "claimed": false, 00:21:23.241 "zoned": false, 00:21:23.241 "supported_io_types": { 00:21:23.241 "read": true, 00:21:23.241 "write": true, 00:21:23.241 "unmap": true, 00:21:23.241 "flush": false, 00:21:23.241 "reset": true, 00:21:23.241 "nvme_admin": false, 00:21:23.241 "nvme_io": false, 00:21:23.241 "nvme_io_md": false, 00:21:23.241 "write_zeroes": true, 00:21:23.241 "zcopy": false, 00:21:23.241 "get_zone_info": false, 00:21:23.241 "zone_management": false, 00:21:23.241 "zone_append": false, 00:21:23.241 "compare": false, 00:21:23.241 "compare_and_write": false, 00:21:23.241 "abort": false, 00:21:23.241 "seek_hole": true, 00:21:23.241 "seek_data": true, 00:21:23.241 "copy": false, 00:21:23.241 "nvme_iov_md": false 00:21:23.241 }, 00:21:23.241 "driver_specific": { 00:21:23.241 "lvol": { 00:21:23.241 "lvol_store_uuid": "fad6dd4a-ba6c-403d-a636-9972cdfd04f5", 00:21:23.241 "base_bdev": "nvme0n1", 00:21:23.241 "thin_provision": true, 00:21:23.241 "num_allocated_clusters": 0, 00:21:23.241 "snapshot": false, 00:21:23.241 "clone": false, 00:21:23.241 "esnap_clone": false 00:21:23.241 } 00:21:23.241 } 00:21:23.241 } 00:21:23.241 ]' 00:21:23.241 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:23.241 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:23.241 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:23.242 05:07:39 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:23.242 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:23.242 05:07:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 34f94f34-2291-4a7c-95e6-fa198527226f --l2p_dram_limit 10' 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:23.242 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:23.242 05:07:39 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 34f94f34-2291-4a7c-95e6-fa198527226f --l2p_dram_limit 10 -c nvc0n1p0 00:21:23.502 [2024-11-21 05:07:39.976106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.976151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.502 [2024-11-21 05:07:39.976168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:23.502 [2024-11-21 05:07:39.976177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.976233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.976244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.502 [2024-11-21 05:07:39.976253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:23.502 [2024-11-21 05:07:39.976262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.976281] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.502 [2024-11-21 05:07:39.976535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.502 [2024-11-21 05:07:39.976551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.976560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.502 [2024-11-21 05:07:39.976567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:23.502 [2024-11-21 05:07:39.976576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.976604] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb 00:21:23.502 [2024-11-21 05:07:39.977927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.977952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:23.502 [2024-11-21 05:07:39.977965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:23.502 [2024-11-21 05:07:39.977972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.984888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 
05:07:39.985031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.502 [2024-11-21 05:07:39.985049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:21:23.502 [2024-11-21 05:07:39.985056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.985170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.985181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.502 [2024-11-21 05:07:39.985194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:23.502 [2024-11-21 05:07:39.985201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.985266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.985277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.502 [2024-11-21 05:07:39.985285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:23.502 [2024-11-21 05:07:39.985291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.985314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.502 [2024-11-21 05:07:39.986993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.987020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.502 [2024-11-21 05:07:39.987028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.688 ms 00:21:23.502 [2024-11-21 05:07:39.987036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.987065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.502 [2024-11-21 05:07:39.987076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.502 [2024-11-21 05:07:39.987085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:23.502 [2024-11-21 05:07:39.987095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.502 [2024-11-21 05:07:39.987108] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:23.502 [2024-11-21 05:07:39.987224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.502 [2024-11-21 05:07:39.987235] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.502 [2024-11-21 05:07:39.987252] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:23.502 [2024-11-21 05:07:39.987260] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.502 [2024-11-21 05:07:39.987274] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987281] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.503 [2024-11-21 05:07:39.987290] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.503 [2024-11-21 05:07:39.987296] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.503 [2024-11-21 05:07:39.987303] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.503 [2024-11-21 05:07:39.987309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.503 [2024-11-21 05:07:39.987317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.503 [2024-11-21 05:07:39.987323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:21:23.503 [2024-11-21 05:07:39.987331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.503 [2024-11-21 05:07:39.987395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.503 [2024-11-21 05:07:39.987407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.503 [2024-11-21 05:07:39.987414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:23.503 [2024-11-21 05:07:39.987423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.503 [2024-11-21 05:07:39.987500] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.503 [2024-11-21 05:07:39.987509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.503 [2024-11-21 05:07:39.987519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.503 [2024-11-21 05:07:39.987540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.503 [2024-11-21 05:07:39.987557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.503 [2024-11-21 05:07:39.987569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.503 [2024-11-21 05:07:39.987575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.503 [2024-11-21 05:07:39.987580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.503 [2024-11-21 05:07:39.987588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.503 [2024-11-21 05:07:39.987594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.503 [2024-11-21 05:07:39.987600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.503 [2024-11-21 05:07:39.987641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.503 [2024-11-21 05:07:39.987662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.503 
[2024-11-21 05:07:39.987683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.503 [2024-11-21 05:07:39.987702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.503 [2024-11-21 05:07:39.987726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.503 [2024-11-21 05:07:39.987746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.503 [2024-11-21 05:07:39.987760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.503 [2024-11-21 05:07:39.987768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.503 [2024-11-21 05:07:39.987773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.503 [2024-11-21 05:07:39.987781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.503 [2024-11-21 05:07:39.987786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:23.503 [2024-11-21 05:07:39.987793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.503 [2024-11-21 05:07:39.987806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.503 [2024-11-21 05:07:39.987812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987818] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.503 [2024-11-21 05:07:39.987825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.503 [2024-11-21 05:07:39.987835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.503 [2024-11-21 05:07:39.987852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.503 [2024-11-21 05:07:39.987860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.503 [2024-11-21 05:07:39.987868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.503 [2024-11-21 05:07:39.987875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.503 [2024-11-21 05:07:39.987882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.503 [2024-11-21 05:07:39.987888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.503 [2024-11-21 05:07:39.987899] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.503 [2024-11-21 
05:07:39.987909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.987918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:23.503 [2024-11-21 05:07:39.987925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.503 [2024-11-21 05:07:39.987932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.503 [2024-11-21 05:07:39.987938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.503 [2024-11-21 05:07:39.987946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.503 [2024-11-21 05:07:39.987952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.503 [2024-11-21 05:07:39.987961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.503 [2024-11-21 05:07:39.987967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.503 [2024-11-21 05:07:39.987975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.503 [2024-11-21 05:07:39.987982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.987989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.987995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.988003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.988010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.503 [2024-11-21 05:07:39.988017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.503 [2024-11-21 05:07:39.988023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.988031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.503 [2024-11-21 05:07:39.988036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.503 [2024-11-21 05:07:39.988050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.503 [2024-11-21 05:07:39.988055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.503 [2024-11-21 05:07:39.988063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.503 [2024-11-21 05:07:39.988069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.503 [2024-11-21 05:07:39.988078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:21:23.503 [2024-11-21 05:07:39.988083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.503 [2024-11-21 05:07:39.988114] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:23.503 [2024-11-21 05:07:39.988123] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:27.712 [2024-11-21 05:07:44.081217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.081336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:27.712 [2024-11-21 05:07:44.081360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4093.069 ms 00:21:27.712 [2024-11-21 05:07:44.081370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.100755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.100818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.712 [2024-11-21 05:07:44.100837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.238 ms 00:21:27.712 [2024-11-21 05:07:44.100848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.100989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.101001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:27.712 [2024-11-21 05:07:44.101016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:27.712 [2024-11-21 05:07:44.101025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.118279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.118336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.712 [2024-11-21 05:07:44.118352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.180 ms 00:21:27.712 [2024-11-21 05:07:44.118362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.118409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.118418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.712 [2024-11-21 05:07:44.118430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:27.712 [2024-11-21 05:07:44.118439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.119181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.119219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.712 [2024-11-21 05:07:44.119236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:21:27.712 [2024-11-21 05:07:44.119246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 
[2024-11-21 05:07:44.119377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.119398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.712 [2024-11-21 05:07:44.119411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:27.712 [2024-11-21 05:07:44.119421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.131317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.131504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.712 [2024-11-21 05:07:44.131758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.868 ms 00:21:27.712 [2024-11-21 05:07:44.131789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.143303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:27.712 [2024-11-21 05:07:44.148390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.148560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:27.712 [2024-11-21 05:07:44.148655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.481 ms 00:21:27.712 [2024-11-21 05:07:44.148687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.236700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.236878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:27.712 [2024-11-21 05:07:44.236944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.962 ms 00:21:27.712 [2024-11-21 05:07:44.236970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.237169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.237198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:27.712 [2024-11-21 05:07:44.237267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:21:27.712 [2024-11-21 05:07:44.237289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.241162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.241198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:27.712 [2024-11-21 05:07:44.241208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.843 ms 00:21:27.712 [2024-11-21 05:07:44.241220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.244023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.244059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:27.712 [2024-11-21 05:07:44.244068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:21:27.712 [2024-11-21 05:07:44.244077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.244346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.244358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:27.712 
[2024-11-21 05:07:44.244366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:21:27.712 [2024-11-21 05:07:44.244375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.279644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.279674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:27.712 [2024-11-21 05:07:44.279683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.254 ms 00:21:27.712 [2024-11-21 05:07:44.279694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.284471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.284502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:27.712 [2024-11-21 05:07:44.284510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:21:27.712 [2024-11-21 05:07:44.284518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.288055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.288083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:27.712 [2024-11-21 05:07:44.288091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:21:27.712 [2024-11-21 05:07:44.288098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.292161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.292193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:27.712 [2024-11-21 05:07:44.292201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.036 ms 00:21:27.712 [2024-11-21 05:07:44.292211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.292242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.712 [2024-11-21 05:07:44.292252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:27.712 [2024-11-21 05:07:44.292260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:27.712 [2024-11-21 05:07:44.292274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.712 [2024-11-21 05:07:44.292332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.713 [2024-11-21 05:07:44.292341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:27.713 [2024-11-21 05:07:44.292352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:27.713 [2024-11-21 05:07:44.292360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.713 [2024-11-21 05:07:44.293220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4316.733 ms, result 0 00:21:27.713 { 00:21:27.713 "name": "ftl0", 00:21:27.713 "uuid": "d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb" 00:21:27.713 } 00:21:27.713 05:07:44 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:27.713 05:07:44 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:27.971 05:07:44 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:27.971 05:07:44 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:27.971 [2024-11-21 05:07:44.702487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.971 [2024-11-21 05:07:44.702601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:27.971 [2024-11-21 05:07:44.702632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:27.971 [2024-11-21 05:07:44.702644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.971 [2024-11-21 05:07:44.702668] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:27.971 [2024-11-21 05:07:44.703198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.971 [2024-11-21 05:07:44.703216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:27.971 [2024-11-21 05:07:44.703224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:21:27.971 [2024-11-21 05:07:44.703232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.971 [2024-11-21 05:07:44.703426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.971 [2024-11-21 05:07:44.703438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:27.971 [2024-11-21 05:07:44.703446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:21:27.971 [2024-11-21 05:07:44.703457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.706002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.706074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:28.233 [2024-11-21 05:07:44.706119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:21:28.233 [2024-11-21 05:07:44.706139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.710826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.710871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:28.233 [2024-11-21 05:07:44.710892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.661 ms 00:21:28.233 [2024-11-21 05:07:44.710910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.712230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.712332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:28.233 [2024-11-21 05:07:44.712377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:21:28.233 [2024-11-21 05:07:44.712397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.716289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.716384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:28.233 [2024-11-21 05:07:44.716430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.859 ms 00:21:28.233 [2024-11-21 05:07:44.716451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.716552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.716574] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:28.233 [2024-11-21 05:07:44.716593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:28.233 [2024-11-21 05:07:44.716620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.718393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.718477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:28.233 [2024-11-21 05:07:44.718518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:21:28.233 [2024-11-21 05:07:44.718537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.719636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.719720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:28.233 [2024-11-21 05:07:44.719760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:21:28.233 [2024-11-21 05:07:44.719779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.721351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.721434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:28.233 [2024-11-21 05:07:44.721472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:21:28.233 [2024-11-21 05:07:44.721491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.723129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.233 [2024-11-21 05:07:44.723212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:28.233 [2024-11-21 05:07:44.723251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:21:28.233 [2024-11-21 05:07:44.723269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.233 [2024-11-21 05:07:44.723300] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:28.233 [2024-11-21 05:07:44.723324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723409] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 
[2024-11-21 05:07:44.723578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:28.233 [2024-11-21 05:07:44.723693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:28.234 [2024-11-21 05:07:44.723768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.723998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:28.234 [2024-11-21 05:07:44.724066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:28.234 [2024-11-21 05:07:44.724072] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb 00:21:28.234 [2024-11-21 05:07:44.724080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:28.234 [2024-11-21 05:07:44.724086] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:28.234 [2024-11-21 05:07:44.724093] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:28.234 [2024-11-21 05:07:44.724103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:28.234 [2024-11-21 05:07:44.724110] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:28.234 [2024-11-21 05:07:44.724118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:28.234 [2024-11-21 05:07:44.724128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:28.234 [2024-11-21 05:07:44.724133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:28.234 [2024-11-21 05:07:44.724140] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:28.234 [2024-11-21 05:07:44.724146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.234 [2024-11-21 05:07:44.724154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:28.234 [2024-11-21 05:07:44.724161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:21:28.234 [2024-11-21 05:07:44.724168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.725887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.234 [2024-11-21 05:07:44.725976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:28.234 [2024-11-21 05:07:44.725989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:21:28.234 [2024-11-21 05:07:44.725999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.726085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.234 [2024-11-21 05:07:44.726094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:28.234 [2024-11-21 05:07:44.726100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:28.234 [2024-11-21 05:07:44.726108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.732120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.234 [2024-11-21 05:07:44.732152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:28.234 [2024-11-21 05:07:44.732160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.234 [2024-11-21 05:07:44.732170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.732214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.234 [2024-11-21 05:07:44.732224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:28.234 [2024-11-21 05:07:44.732230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.234 [2024-11-21 05:07:44.732237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.732298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.234 [2024-11-21 05:07:44.732311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:28.234 [2024-11-21 05:07:44.732318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.234 [2024-11-21 05:07:44.732326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.732342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.234 [2024-11-21 05:07:44.732350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:28.234 [2024-11-21 05:07:44.732355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.234 [2024-11-21 05:07:44.732363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.743435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.234 [2024-11-21 05:07:44.743470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:28.234 [2024-11-21 05:07:44.743479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.234 
[2024-11-21 05:07:44.743488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.234 [2024-11-21 05:07:44.752215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.234 [2024-11-21 05:07:44.752253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:28.234 [2024-11-21 05:07:44.752262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.235 [2024-11-21 05:07:44.752349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:28.235 [2024-11-21 05:07:44.752355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.235 [2024-11-21 05:07:44.752405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:28.235 [2024-11-21 05:07:44.752411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.235 [2024-11-21 05:07:44.752491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:28.235 [2024-11-21 05:07:44.752498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.235 [2024-11-21 05:07:44.752542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:28.235 [2024-11-21 05:07:44.752548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.235 [2024-11-21 05:07:44.752601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.235 [2024-11-21 05:07:44.752726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:28.235 [2024-11-21 05:07:44.752828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.235 [2024-11-21 05:07:44.752843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:28.235 [2024-11-21 05:07:44.752860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.235 [2024-11-21 05:07:44.752990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.468 ms, result 0 00:21:28.235 true 00:21:28.235 05:07:44 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88598 00:21:28.235 
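The `killprocess 88598` call just above tears down the SPDK app process used for the first half of the test; the xtrace lines that follow step through its guards one by one. A minimal sketch of that pattern, reconstructed only from the trace itself (the real helper lives in common/autotest_common.sh and may differ in detail):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # @954: require a PID argument
        kill -0 "$pid" || return 1             # @958: bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then        # @959: platform check
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: resolve the command name
            [ "$process_name" = sudo ] && return 1            # @964: never signal a sudo wrapper directly (assumed branch)
        fi
        echo "killing process with pid $pid"   # @972
        kill "$pid"                            # @973: ask the reactor to shut down
        wait "$pid"                            # @978: reap it so bdevs and hugepages are released
    }

Once the process is reaped, restore.sh@69 below generates the 1 GiB test file with dd; the reported 290 MB/s is consistent with the printed figures, since 1073741824 bytes / 3.70711 s ≈ 289.6 MB/s.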
05:07:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88598 ']' 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88598 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88598 00:21:28.235 killing process with pid 88598 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88598' 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88598 00:21:28.235 05:07:44 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88598 00:21:33.531 05:07:50 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:37.786 262144+0 records in 00:21:37.786 262144+0 records out 00:21:37.786 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.70711 s, 290 MB/s 00:21:37.786 05:07:53 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:39.171 05:07:55 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:39.171 [2024-11-21 05:07:55.765336] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:21:39.171 [2024-11-21 05:07:55.765571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88819 ] 00:21:39.429 [2024-11-21 05:07:55.905351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.430 [2024-11-21 05:07:55.930120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.430 [2024-11-21 05:07:56.029787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:39.430 [2024-11-21 05:07:56.029842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:39.690 [2024-11-21 05:07:56.177632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.177767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:39.690 [2024-11-21 05:07:56.177785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:39.690 [2024-11-21 05:07:56.177796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.177847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.177855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.690 [2024-11-21 05:07:56.177862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:39.690 [2024-11-21 05:07:56.177869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.177889] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:21:39.690 [2024-11-21 05:07:56.178102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:39.690 [2024-11-21 05:07:56.178117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.178125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.690 [2024-11-21 05:07:56.178132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:21:39.690 [2024-11-21 05:07:56.178140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.179402] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:39.690 [2024-11-21 05:07:56.181915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.181948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:39.690 [2024-11-21 05:07:56.181957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.515 ms 00:21:39.690 [2024-11-21 05:07:56.181966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.182013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.182026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:39.690 [2024-11-21 05:07:56.182034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:39.690 [2024-11-21 05:07:56.182040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.188295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.188321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.690 [2024-11-21 05:07:56.188333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.215 ms 00:21:39.690 [2024-11-21 05:07:56.188342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.188412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.188419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.690 [2024-11-21 05:07:56.188426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:39.690 [2024-11-21 05:07:56.188431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.188469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.188479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:39.690 [2024-11-21 05:07:56.188486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:39.690 [2024-11-21 05:07:56.188491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.188514] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:39.690 [2024-11-21 05:07:56.190104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.190127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.690 [2024-11-21 05:07:56.190134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:21:39.690 [2024-11-21 05:07:56.190145] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.190170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.190176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:39.690 [2024-11-21 05:07:56.190183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:39.690 [2024-11-21 05:07:56.190189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.690 [2024-11-21 05:07:56.190207] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:39.690 [2024-11-21 05:07:56.190228] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:39.690 [2024-11-21 05:07:56.190257] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:39.690 [2024-11-21 05:07:56.190270] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:39.690 [2024-11-21 05:07:56.190353] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:39.690 [2024-11-21 05:07:56.190361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:39.690 [2024-11-21 05:07:56.190369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:39.690 [2024-11-21 05:07:56.190380] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:39.690 [2024-11-21 05:07:56.190386] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:39.690 [2024-11-21 05:07:56.190396] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:39.690 [2024-11-21 05:07:56.190402] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:39.690 [2024-11-21 05:07:56.190408] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:39.690 [2024-11-21 05:07:56.190414] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:39.690 [2024-11-21 05:07:56.190420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.690 [2024-11-21 05:07:56.190426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:39.691 [2024-11-21 05:07:56.190432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:21:39.691 [2024-11-21 05:07:56.190437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.691 [2024-11-21 05:07:56.190503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.691 [2024-11-21 05:07:56.190511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:39.691 [2024-11-21 05:07:56.190522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:39.691 [2024-11-21 05:07:56.190530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.691 [2024-11-21 05:07:56.190622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:39.691 [2024-11-21 05:07:56.190630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:39.691 [2024-11-21 05:07:56.190637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.691 
[2024-11-21 05:07:56.190643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:39.691 [2024-11-21 05:07:56.190661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:39.691 [2024-11-21 05:07:56.190677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.691 [2024-11-21 05:07:56.190688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:39.691 [2024-11-21 05:07:56.190693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:39.691 [2024-11-21 05:07:56.190700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.691 [2024-11-21 05:07:56.190705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:39.691 [2024-11-21 05:07:56.190711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:39.691 [2024-11-21 05:07:56.190716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:39.691 [2024-11-21 05:07:56.190729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:39.691 [2024-11-21 05:07:56.190745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:39.691 [2024-11-21 05:07:56.190763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:39.691 [2024-11-21 05:07:56.190781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:39.691 [2024-11-21 05:07:56.190802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:39.691 [2024-11-21 05:07:56.190819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.691 [2024-11-21 05:07:56.190831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:21:39.691 [2024-11-21 05:07:56.190837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:39.691 [2024-11-21 05:07:56.190842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.691 [2024-11-21 05:07:56.190848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:39.691 [2024-11-21 05:07:56.190854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:39.691 [2024-11-21 05:07:56.190860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:39.691 [2024-11-21 05:07:56.190871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:39.691 [2024-11-21 05:07:56.190878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190883] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:39.691 [2024-11-21 05:07:56.190892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:39.691 [2024-11-21 05:07:56.190900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.691 [2024-11-21 05:07:56.190914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:39.691 [2024-11-21 05:07:56.190920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:39.691 [2024-11-21 05:07:56.190927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:39.691 [2024-11-21 05:07:56.190933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:39.691 [2024-11-21 05:07:56.190939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:39.691 [2024-11-21 05:07:56.190945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:39.691 [2024-11-21 05:07:56.190952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:39.691 [2024-11-21 05:07:56.190960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.190967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:39.691 [2024-11-21 05:07:56.190974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:39.691 [2024-11-21 05:07:56.190980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:39.691 [2024-11-21 05:07:56.190986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:39.691 [2024-11-21 05:07:56.190993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:39.691 [2024-11-21 05:07:56.191001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:39.691 [2024-11-21 05:07:56.191007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:39.691 [2024-11-21 05:07:56.191013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:39.691 [2024-11-21 05:07:56.191019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:39.691 [2024-11-21 05:07:56.191026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.191032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.191038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.191044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.191050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:39.691 [2024-11-21 05:07:56.191056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:39.691 [2024-11-21 05:07:56.191063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.191071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:39.691 [2024-11-21 05:07:56.191078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:39.691 [2024-11-21 05:07:56.191084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:39.691 [2024-11-21 05:07:56.191090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:39.691 [2024-11-21 05:07:56.191096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.691 [2024-11-21 05:07:56.191105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:39.691 [2024-11-21 05:07:56.191112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:21:39.691 [2024-11-21 05:07:56.191119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.691 [2024-11-21 05:07:56.202242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.691 [2024-11-21 05:07:56.202278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:39.691 [2024-11-21 05:07:56.202292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.081 ms 00:21:39.691 [2024-11-21 05:07:56.202301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.691 [2024-11-21 05:07:56.202376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.691 [2024-11-21 05:07:56.202383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:39.691 [2024-11-21 05:07:56.202389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 
00:21:39.691 [2024-11-21 05:07:56.202394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.691 [2024-11-21 05:07:56.220618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.691 [2024-11-21 05:07:56.220655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:39.691 [2024-11-21 05:07:56.220667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.173 ms 00:21:39.691 [2024-11-21 05:07:56.220675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.691 [2024-11-21 05:07:56.220721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.691 [2024-11-21 05:07:56.220738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:39.691 [2024-11-21 05:07:56.220747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:39.692 [2024-11-21 05:07:56.220754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.221209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.221232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:39.692 [2024-11-21 05:07:56.221242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:21:39.692 [2024-11-21 05:07:56.221251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.221391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.221400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:39.692 [2024-11-21 05:07:56.221409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:39.692 [2024-11-21 05:07:56.221418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.227985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.228138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:39.692 [2024-11-21 05:07:56.228161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.546 ms 00:21:39.692 [2024-11-21 05:07:56.228175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.231018] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:39.692 [2024-11-21 05:07:56.231058] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:39.692 [2024-11-21 05:07:56.231070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.231079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:39.692 [2024-11-21 05:07:56.231087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:21:39.692 [2024-11-21 05:07:56.231095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.243969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.244008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:39.692 [2024-11-21 05:07:56.244021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.836 ms 00:21:39.692 [2024-11-21 05:07:56.244030] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.245582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.245624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:39.692 [2024-11-21 05:07:56.245632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:21:39.692 [2024-11-21 05:07:56.245639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.246957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.246982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:39.692 [2024-11-21 05:07:56.246989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.291 ms 00:21:39.692 [2024-11-21 05:07:56.246995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.247249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.247266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:39.692 [2024-11-21 05:07:56.247273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:21:39.692 [2024-11-21 05:07:56.247279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.263577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.263639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:39.692 [2024-11-21 05:07:56.263654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.284 ms 00:21:39.692 [2024-11-21 05:07:56.263661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.269591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:39.692 [2024-11-21 05:07:56.272005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.272032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:39.692 [2024-11-21 05:07:56.272043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.304 ms 00:21:39.692 [2024-11-21 05:07:56.272054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.272107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.272115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:39.692 [2024-11-21 05:07:56.272122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:39.692 [2024-11-21 05:07:56.272128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.272210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.272218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:39.692 [2024-11-21 05:07:56.272226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:39.692 [2024-11-21 05:07:56.272234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.272250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.272257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:21:39.692 [2024-11-21 05:07:56.272264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:39.692 [2024-11-21 05:07:56.272269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.272302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:39.692 [2024-11-21 05:07:56.272310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.272318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:39.692 [2024-11-21 05:07:56.272325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:39.692 [2024-11-21 05:07:56.272331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.276411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.276537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:39.692 [2024-11-21 05:07:56.276551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:21:39.692 [2024-11-21 05:07:56.276558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.276633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.692 [2024-11-21 05:07:56.276642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:39.692 [2024-11-21 05:07:56.276649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:39.692 [2024-11-21 05:07:56.276656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.692 [2024-11-21 05:07:56.277605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.599 ms, result 0 00:21:40.625  [2024-11-21T05:07:58.294Z] Copying: 42/1024 [MB] (42 MBps) [2024-11-21T05:07:59.679Z] Copying: 83/1024 [MB] (41 MBps) [2024-11-21T05:08:00.625Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-21T05:08:01.568Z] Copying: 112/1024 [MB] (12 MBps) [2024-11-21T05:08:02.512Z] Copying: 125/1024 [MB] (13 MBps) [2024-11-21T05:08:03.457Z] Copying: 135/1024 [MB] (10 MBps) [2024-11-21T05:08:04.402Z] Copying: 147/1024 [MB] (11 MBps) [2024-11-21T05:08:05.346Z] Copying: 162/1024 [MB] (15 MBps) [2024-11-21T05:08:06.292Z] Copying: 179/1024 [MB] (16 MBps) [2024-11-21T05:08:07.682Z] Copying: 190/1024 [MB] (10 MBps) [2024-11-21T05:08:08.625Z] Copying: 204/1024 [MB] (14 MBps) [2024-11-21T05:08:09.567Z] Copying: 223/1024 [MB] (19 MBps) [2024-11-21T05:08:10.500Z] Copying: 240/1024 [MB] (17 MBps) [2024-11-21T05:08:11.433Z] Copying: 262/1024 [MB] (21 MBps) [2024-11-21T05:08:12.365Z] Copying: 312/1024 [MB] (50 MBps) [2024-11-21T05:08:13.300Z] Copying: 363/1024 [MB] (50 MBps) [2024-11-21T05:08:14.686Z] Copying: 401/1024 [MB] (37 MBps) [2024-11-21T05:08:15.628Z] Copying: 418/1024 [MB] (17 MBps) [2024-11-21T05:08:16.570Z] Copying: 434/1024 [MB] (16 MBps) [2024-11-21T05:08:17.514Z] Copying: 446/1024 [MB] (11 MBps) [2024-11-21T05:08:18.458Z] Copying: 467/1024 [MB] (21 MBps) [2024-11-21T05:08:19.408Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-21T05:08:20.352Z] Copying: 493/1024 [MB] (15 MBps) [2024-11-21T05:08:21.296Z] Copying: 507/1024 [MB] (13 MBps) [2024-11-21T05:08:22.684Z] Copying: 519/1024 [MB] (12 MBps) [2024-11-21T05:08:23.632Z] Copying: 531/1024 [MB] (12 MBps) [2024-11-21T05:08:24.634Z] Copying: 541/1024 [MB] (10 MBps) 
[2024-11-21 05:08:40.291407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.291448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:23.801 [2024-11-21 05:08:40.291461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:23.801 [2024-11-21 05:08:40.291467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.291489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:23.801 [2024-11-21 05:08:40.292034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.292051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:23.801 [2024-11-21 05:08:40.292065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:22:23.801 [2024-11-21 05:08:40.292072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.293561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.293678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:23.801 [2024-11-21 05:08:40.293692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:22:23.801 [2024-11-21 05:08:40.293699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.306042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.306076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:23.801 [2024-11-21 05:08:40.306086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.329 ms 00:22:23.801 [2024-11-21 05:08:40.306098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.310856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.310879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:23.801 [2024-11-21 05:08:40.310888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.735 ms 00:22:23.801 [2024-11-21 05:08:40.310894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.312194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.312221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:23.801 [2024-11-21 05:08:40.312229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:22:23.801 [2024-11-21 05:08:40.312235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.315747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.315774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:23.801 [2024-11-21 05:08:40.315782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.490 ms 00:22:23.801 [2024-11-21 05:08:40.315787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.315872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.315879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:23.801 [2024-11-21 05:08:40.315886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:23.801 [2024-11-21 05:08:40.315899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.317784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.317816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:23.801 [2024-11-21 05:08:40.317822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:22:23.801 [2024-11-21 05:08:40.317827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.801 [2024-11-21 05:08:40.318974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.801 [2024-11-21 05:08:40.318997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:23.802 [2024-11-21 05:08:40.319004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:22:23.802 [2024-11-21 05:08:40.319014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.802 [2024-11-21 05:08:40.320143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.802 [2024-11-21 05:08:40.320168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:23.802 [2024-11-21 05:08:40.320175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:22:23.802 [2024-11-21 05:08:40.320180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.802 [2024-11-21 05:08:40.321207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.802 [2024-11-21 05:08:40.321308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:23.802 [2024-11-21 05:08:40.321320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:22:23.802 [2024-11-21 05:08:40.321325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.802 [2024-11-21 05:08:40.321346] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:23.802 [2024-11-21 05:08:40.321363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321377] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321523] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 
[2024-11-21 05:08:40.321683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:23.802 [2024-11-21 05:08:40.321775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:22:23.803 [2024-11-21 05:08:40.321834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:23.803 [2024-11-21 05:08:40.321973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:23.803 [2024-11-21 05:08:40.321979] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb 
00:22:23.803 [2024-11-21 05:08:40.321986] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:23.803 [2024-11-21 05:08:40.321991] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:23.803 [2024-11-21 05:08:40.321998] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:23.803 [2024-11-21 05:08:40.322004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:23.803 [2024-11-21 05:08:40.322014] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:23.803 [2024-11-21 05:08:40.322020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:23.803 [2024-11-21 05:08:40.322026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:23.803 [2024-11-21 05:08:40.322031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:23.803 [2024-11-21 05:08:40.322036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:23.803 [2024-11-21 05:08:40.322041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.803 [2024-11-21 05:08:40.322052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:23.803 [2024-11-21 05:08:40.322064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:22:23.803 [2024-11-21 05:08:40.322070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.323789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.803 [2024-11-21 05:08:40.323806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:23.803 [2024-11-21 05:08:40.323818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:22:23.803 [2024-11-21 05:08:40.323824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.323907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:23.803 [2024-11-21 05:08:40.323917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:23.803 [2024-11-21 05:08:40.323923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:23.803 [2024-11-21 05:08:40.323929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.329497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.329592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:23.803 [2024-11-21 05:08:40.329656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.329675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.329747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.329775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:23.803 [2024-11-21 05:08:40.329812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.329829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.329882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.329907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:23.803 [2024-11-21 05:08:40.329923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.329942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.329964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.330013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:23.803 [2024-11-21 05:08:40.330033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.330047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.340335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.340455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:23.803 [2024-11-21 05:08:40.340512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.340529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.348866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.348979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.803 [2024-11-21 05:08:40.349027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.349049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.349097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.349256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.803 [2024-11-21 05:08:40.349275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.349291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.803 [2024-11-21 05:08:40.349321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.803 [2024-11-21 05:08:40.349339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.803 [2024-11-21 05:08:40.349354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.803 [2024-11-21 05:08:40.349409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.804 [2024-11-21 05:08:40.349475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.804 [2024-11-21 05:08:40.349487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.804 [2024-11-21 05:08:40.349493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.804 [2024-11-21 05:08:40.349499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.804 [2024-11-21 05:08:40.349524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.804 [2024-11-21 05:08:40.349533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.804 [2024-11-21 05:08:40.349539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.804 [2024-11-21 05:08:40.349545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.804 [2024-11-21 05:08:40.349583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.804 [2024-11-21 05:08:40.349591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:22:23.804 [2024-11-21 05:08:40.349597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.804 [2024-11-21 05:08:40.349603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.804 [2024-11-21 05:08:40.349659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.804 [2024-11-21 05:08:40.349668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.804 [2024-11-21 05:08:40.349674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.804 [2024-11-21 05:08:40.349685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.804 [2024-11-21 05:08:40.349793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.356 ms, result 0 00:22:24.064 00:22:24.064 00:22:24.064 05:08:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:24.064 [2024-11-21 05:08:40.734416] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:22:24.064 [2024-11-21 05:08:40.734552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89281 ] 00:22:24.325 [2024-11-21 05:08:40.896029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.325 [2024-11-21 05:08:40.931468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.588 [2024-11-21 05:08:41.078249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.588 [2024-11-21 05:08:41.078551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.588 [2024-11-21 05:08:41.242988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.243058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.588 [2024-11-21 05:08:41.243076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:24.588 [2024-11-21 05:08:41.243090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.243152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.243163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.588 [2024-11-21 05:08:41.243174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:24.588 [2024-11-21 05:08:41.243183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.243208] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.588 [2024-11-21 05:08:41.243502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.588 [2024-11-21 05:08:41.243523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.243533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.588 [2024-11-21 05:08:41.243552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.320 ms 00:22:24.588 [2024-11-21 05:08:41.243562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.245847] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:24.588 [2024-11-21 05:08:41.250640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.250690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:24.588 [2024-11-21 05:08:41.250713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.795 ms 00:22:24.588 [2024-11-21 05:08:41.250731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.250814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.250829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:24.588 [2024-11-21 05:08:41.250839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:24.588 [2024-11-21 05:08:41.250848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.262480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.262530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.588 [2024-11-21 05:08:41.262551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.568 ms 00:22:24.588 [2024-11-21 05:08:41.262565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.262695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.262707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.588 [2024-11-21 05:08:41.262717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:24.588 [2024-11-21 05:08:41.262728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.262791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.262801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.588 [2024-11-21 05:08:41.262811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:24.588 [2024-11-21 05:08:41.262819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.262849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.588 [2024-11-21 05:08:41.265525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.265743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.588 [2024-11-21 05:08:41.265763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:22:24.588 [2024-11-21 05:08:41.265771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.265816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.588 [2024-11-21 05:08:41.265826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.588 [2024-11-21 05:08:41.265836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:24.588 [2024-11-21 05:08:41.265852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:24.588 [2024-11-21 05:08:41.265881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:24.588 [2024-11-21 05:08:41.265909] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:24.588 [2024-11-21 05:08:41.265955] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:24.588 [2024-11-21 05:08:41.265978] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:24.588 [2024-11-21 05:08:41.266093] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.588 [2024-11-21 05:08:41.266106] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.588 [2024-11-21 05:08:41.266122] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.588 [2024-11-21 05:08:41.266137] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.588 [2024-11-21 05:08:41.266146] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266159] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:24.589 [2024-11-21 05:08:41.266168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.589 [2024-11-21 05:08:41.266177] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.589 [2024-11-21 05:08:41.266185] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.589 [2024-11-21 05:08:41.266197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.589 [2024-11-21 05:08:41.266206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.589 [2024-11-21 05:08:41.266217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:24.589 [2024-11-21 05:08:41.266226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.589 [2024-11-21 05:08:41.266310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.589 [2024-11-21 05:08:41.266322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.589 [2024-11-21 05:08:41.266337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:24.589 [2024-11-21 05:08:41.266345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.589 [2024-11-21 05:08:41.266457] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.589 [2024-11-21 05:08:41.266473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.589 [2024-11-21 05:08:41.266483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.589 [2024-11-21 05:08:41.266516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266533] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:24.589 [2024-11-21 05:08:41.266542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.589 [2024-11-21 05:08:41.266562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.589 [2024-11-21 05:08:41.266570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:24.589 [2024-11-21 05:08:41.266577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.589 [2024-11-21 05:08:41.266585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.589 [2024-11-21 05:08:41.266597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:24.589 [2024-11-21 05:08:41.266606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.589 [2024-11-21 05:08:41.266639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.589 [2024-11-21 05:08:41.266664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.589 [2024-11-21 05:08:41.266688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.589 [2024-11-21 05:08:41.266716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.589 [2024-11-21 05:08:41.266738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.589 [2024-11-21 05:08:41.266761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.589 [2024-11-21 05:08:41.266776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.589 [2024-11-21 05:08:41.266783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:24.589 [2024-11-21 05:08:41.266790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.589 [2024-11-21 05:08:41.266797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.589 [2024-11-21 05:08:41.266805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 
00:22:24.589 [2024-11-21 05:08:41.266812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.589 [2024-11-21 05:08:41.266827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:24.589 [2024-11-21 05:08:41.266837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.589 [2024-11-21 05:08:41.266854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.589 [2024-11-21 05:08:41.266866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.589 [2024-11-21 05:08:41.266885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:24.589 [2024-11-21 05:08:41.266893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.589 [2024-11-21 05:08:41.266900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.589 [2024-11-21 05:08:41.266907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.589 [2024-11-21 05:08:41.266914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.589 [2024-11-21 05:08:41.266921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.589 [2024-11-21 05:08:41.266930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.589 [2024-11-21 05:08:41.266941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.589 [2024-11-21 05:08:41.266949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:24.589 [2024-11-21 05:08:41.266957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:24.589 [2024-11-21 05:08:41.266964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:24.589 [2024-11-21 05:08:41.266975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:24.589 [2024-11-21 05:08:41.266983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:24.589 [2024-11-21 05:08:41.266990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:24.589 [2024-11-21 05:08:41.266997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:24.589 [2024-11-21 05:08:41.267005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:24.589 [2024-11-21 05:08:41.267012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:24.589 [2024-11-21 05:08:41.267020] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:24.589 [2024-11-21 05:08:41.267026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:24.589 [2024-11-21 05:08:41.267034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:24.589 [2024-11-21 05:08:41.267041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:24.589 [2024-11-21 05:08:41.267048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:24.589 [2024-11-21 05:08:41.267056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.590 [2024-11-21 05:08:41.267065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.590 [2024-11-21 05:08:41.267074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.590 [2024-11-21 05:08:41.267082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.590 [2024-11-21 05:08:41.267089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.590 [2024-11-21 05:08:41.267100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.590 [2024-11-21 05:08:41.267107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.267115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.590 [2024-11-21 05:08:41.267123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:22:24.590 [2024-11-21 05:08:41.267132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.590 [2024-11-21 05:08:41.287404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.287457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:24.590 [2024-11-21 05:08:41.287471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.194 ms 00:22:24.590 [2024-11-21 05:08:41.287480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.590 [2024-11-21 05:08:41.287579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.287589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:24.590 [2024-11-21 05:08:41.287599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:24.590 [2024-11-21 05:08:41.287607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.590 [2024-11-21 05:08:41.311160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.311233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.590 [2024-11-21 05:08:41.311251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.469 ms 
00:22:24.590 [2024-11-21 05:08:41.311264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.590 [2024-11-21 05:08:41.311327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.311342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.590 [2024-11-21 05:08:41.311356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:24.590 [2024-11-21 05:08:41.311382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.590 [2024-11-21 05:08:41.312189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.312237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.590 [2024-11-21 05:08:41.312253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:22:24.590 [2024-11-21 05:08:41.312265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.590 [2024-11-21 05:08:41.312470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.590 [2024-11-21 05:08:41.312483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.590 [2024-11-21 05:08:41.312496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:22:24.590 [2024-11-21 05:08:41.312506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.324000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.324213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.852 [2024-11-21 05:08:41.324246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.465 ms 00:22:24.852 [2024-11-21 05:08:41.324260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.329182] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:24.852 [2024-11-21 05:08:41.329232] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:24.852 [2024-11-21 05:08:41.329246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.329255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:24.852 [2024-11-21 05:08:41.329265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms 00:22:24.852 [2024-11-21 05:08:41.329273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.345198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.345258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:24.852 [2024-11-21 05:08:41.345277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.861 ms 00:22:24.852 [2024-11-21 05:08:41.345286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.348210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.348257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:24.852 [2024-11-21 05:08:41.348268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:22:24.852 [2024-11-21 05:08:41.348276] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.351024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.351199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:24.852 [2024-11-21 05:08:41.351217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:22:24.852 [2024-11-21 05:08:41.351236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.351586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.351600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:24.852 [2024-11-21 05:08:41.351637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:22:24.852 [2024-11-21 05:08:41.351646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.381034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.381131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:24.852 [2024-11-21 05:08:41.381176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.356 ms 00:22:24.852 [2024-11-21 05:08:41.381187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.389585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:24.852 [2024-11-21 05:08:41.393534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.393589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:24.852 [2024-11-21 05:08:41.393604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:22:24.852 [2024-11-21 05:08:41.393626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.393720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.393732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:24.852 [2024-11-21 05:08:41.393742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:24.852 [2024-11-21 05:08:41.393753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.393846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.393857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:24.852 [2024-11-21 05:08:41.393870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:24.852 [2024-11-21 05:08:41.393879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.393902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.393911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:24.852 [2024-11-21 05:08:41.393930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:24.852 [2024-11-21 05:08:41.393939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.393988] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:24.852 [2024-11-21 05:08:41.393999] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.394009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:24.852 [2024-11-21 05:08:41.394018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:24.852 [2024-11-21 05:08:41.394030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.852 [2024-11-21 05:08:41.400754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.852 [2024-11-21 05:08:41.400812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:24.852 [2024-11-21 05:08:41.400825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.700 ms 00:22:24.852 [2024-11-21 05:08:41.400834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.853 [2024-11-21 05:08:41.400941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.853 [2024-11-21 05:08:41.400956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:24.853 [2024-11-21 05:08:41.400970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:24.853 [2024-11-21 05:08:41.400979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.853 [2024-11-21 05:08:41.402479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.885 ms, result 0 00:22:26.238  [2024-11-21T05:08:43.919Z] Copying: 25/1024 [MB] (25 MBps) … (intermediate progress ticks elided) … [2024-11-21T05:09:47.758Z] Copying: 1024/1024 [MB] (average 15 MBps)
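The "Copying: N/1024 [MB] (X MBps)" ticks above are periodic progress output from the copy, here driven by the spdk_dd invocation shown earlier (--ib=ftl0 --count=262144, presumably 4 KiB blocks, i.e. the 1024 MB total). Each tick reports the megabytes moved since the previous tick; the closing line reports the whole-transfer average (1024 MB over the total elapsed time, about 15 MBps for this pass). A small C sketch of that arithmetic with a simulated one-tick-per-second copy loop; the loop body and rates are invented for illustration:

    #include <stdio.h>

    int main(void)
    {
            const int total_mb = 1024;
            int copied_mb = 0, prev_mb = 0, seconds = 0;

            while (copied_mb < total_mb) {
                    /* pretend one second elapsed and some data moved */
                    copied_mb += 15;
                    if (copied_mb > total_mb)
                            copied_mb = total_mb;
                    seconds++;
                    /* per-tick rate = delta since the previous tick */
                    printf("Copying: %d/%d [MB] (%d MBps)\n",
                           copied_mb, total_mb, copied_mb - prev_mb);
                    prev_mb = copied_mb;
            }
            /* closing line: overall average over the whole transfer */
            printf("Copying: %d/%d [MB] (average %d MBps)\n",
                   total_mb, total_mb, total_mb / seconds);
            return 0;
    }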
[MB] (20 MBps) [2024-11-21T05:09:18.992Z] Copying: 555/1024 [MB] (17 MBps) [2024-11-21T05:09:19.938Z] Copying: 573/1024 [MB] (18 MBps) [2024-11-21T05:09:20.882Z] Copying: 590/1024 [MB] (17 MBps) [2024-11-21T05:09:21.875Z] Copying: 612/1024 [MB] (22 MBps) [2024-11-21T05:09:22.821Z] Copying: 635/1024 [MB] (22 MBps) [2024-11-21T05:09:23.767Z] Copying: 658/1024 [MB] (22 MBps) [2024-11-21T05:09:24.707Z] Copying: 679/1024 [MB] (20 MBps) [2024-11-21T05:09:25.649Z] Copying: 699/1024 [MB] (20 MBps) [2024-11-21T05:09:26.592Z] Copying: 724/1024 [MB] (25 MBps) [2024-11-21T05:09:27.979Z] Copying: 743/1024 [MB] (19 MBps) [2024-11-21T05:09:28.922Z] Copying: 759/1024 [MB] (15 MBps) [2024-11-21T05:09:29.865Z] Copying: 775/1024 [MB] (15 MBps) [2024-11-21T05:09:30.810Z] Copying: 790/1024 [MB] (15 MBps) [2024-11-21T05:09:31.754Z] Copying: 803/1024 [MB] (12 MBps) [2024-11-21T05:09:32.700Z] Copying: 814/1024 [MB] (10 MBps) [2024-11-21T05:09:33.643Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-21T05:09:35.027Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-21T05:09:35.599Z] Copying: 845/1024 [MB] (10 MBps) [2024-11-21T05:09:36.982Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-21T05:09:37.921Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-21T05:09:38.860Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-21T05:09:39.803Z] Copying: 894/1024 [MB] (18 MBps) [2024-11-21T05:09:40.749Z] Copying: 910/1024 [MB] (15 MBps) [2024-11-21T05:09:41.693Z] Copying: 921/1024 [MB] (10 MBps) [2024-11-21T05:09:42.636Z] Copying: 937/1024 [MB] (16 MBps) [2024-11-21T05:09:44.025Z] Copying: 956/1024 [MB] (18 MBps) [2024-11-21T05:09:44.595Z] Copying: 974/1024 [MB] (18 MBps) [2024-11-21T05:09:45.979Z] Copying: 988/1024 [MB] (13 MBps) [2024-11-21T05:09:46.924Z] Copying: 1004/1024 [MB] (15 MBps) [2024-11-21T05:09:47.496Z] Copying: 1014/1024 [MB] (10 MBps) [2024-11-21T05:09:47.758Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-21 05:09:47.623216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.623458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:31.024 [2024-11-21 05:09:47.623481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:31.024 [2024-11-21 05:09:47.623498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.623527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.024 [2024-11-21 05:09:47.624104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.624127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:31.024 [2024-11-21 05:09:47.624138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:23:31.024 [2024-11-21 05:09:47.624146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.624373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.624385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:31.024 [2024-11-21 05:09:47.624394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:23:31.024 [2024-11-21 05:09:47.624403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.627869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.627887] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:31.024 [2024-11-21 05:09:47.627896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.449 ms 00:23:31.024 [2024-11-21 05:09:47.627905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.634099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.634261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:31.024 [2024-11-21 05:09:47.634278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:23:31.024 [2024-11-21 05:09:47.634286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.637175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.637209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:31.024 [2024-11-21 05:09:47.637219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.818 ms 00:23:31.024 [2024-11-21 05:09:47.637226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.643696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.643833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:31.024 [2024-11-21 05:09:47.643891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.429 ms 00:23:31.024 [2024-11-21 05:09:47.643916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.644062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.644231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:31.024 [2024-11-21 05:09:47.644253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:31.024 [2024-11-21 05:09:47.644261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.647571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.647625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:31.024 [2024-11-21 05:09:47.647635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:23:31.024 [2024-11-21 05:09:47.647642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.650245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.650398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:31.024 [2024-11-21 05:09:47.650414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:23:31.024 [2024-11-21 05:09:47.650422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.652371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 [2024-11-21 05:09:47.652398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:31.024 [2024-11-21 05:09:47.652407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.916 ms 00:23:31.024 [2024-11-21 05:09:47.652414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.654397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.024 
[2024-11-21 05:09:47.654522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:31.024 [2024-11-21 05:09:47.654576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.915 ms 00:23:31.024 [2024-11-21 05:09:47.654597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.024 [2024-11-21 05:09:47.654754] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:31.024 [2024-11-21 05:09:47.654933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.654969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.654999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.655027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.655190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.655238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.655267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:31.024 [2024-11-21 05:09:47.655295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.655990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.656972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.657987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658161] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 05:09:47.658825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:31.025 [2024-11-21 
05:09:47.658832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:31.026 [2024-11-21 05:09:47.658839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:31.026 [2024-11-21 05:09:47.658847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:31.026 [2024-11-21 05:09:47.658854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:31.026 [2024-11-21 05:09:47.658873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:31.026 [2024-11-21 05:09:47.658891] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb 00:23:31.026 [2024-11-21 05:09:47.658899] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:31.026 [2024-11-21 05:09:47.658907] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:31.026 [2024-11-21 05:09:47.658915] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:31.026 [2024-11-21 05:09:47.658924] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:31.026 [2024-11-21 05:09:47.658931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:31.026 [2024-11-21 05:09:47.658940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:31.026 [2024-11-21 05:09:47.658947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:31.026 [2024-11-21 05:09:47.658953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:31.026 [2024-11-21 05:09:47.658960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:31.026 [2024-11-21 05:09:47.658974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.026 [2024-11-21 05:09:47.658993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:31.026 [2024-11-21 05:09:47.659010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.224 ms 00:23:31.026 [2024-11-21 05:09:47.659018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.662096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.026 [2024-11-21 05:09:47.662126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:31.026 [2024-11-21 05:09:47.662136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:23:31.026 [2024-11-21 05:09:47.662144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.662264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.026 [2024-11-21 05:09:47.662273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:31.026 [2024-11-21 05:09:47.662282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:31.026 [2024-11-21 05:09:47.662295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.669182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.669216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:31.026 [2024-11-21 05:09:47.669226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.669235] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.669304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.669313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.026 [2024-11-21 05:09:47.669331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.669339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.669400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.669410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.026 [2024-11-21 05:09:47.669418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.669426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.669441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.669454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.026 [2024-11-21 05:09:47.669462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.669473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.682719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.682764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.026 [2024-11-21 05:09:47.682776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.682784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.692597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.692718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.026 [2024-11-21 05:09:47.692729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.692737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.692795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.692806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.026 [2024-11-21 05:09:47.692814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.692827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.692865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.692874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.026 [2024-11-21 05:09:47.692885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.692893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.692967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.692978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.026 [2024-11-21 05:09:47.692987] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.692995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.693022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.693032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:31.026 [2024-11-21 05:09:47.693044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.693055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.693096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.693105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.026 [2024-11-21 05:09:47.693113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.693121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.693180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.026 [2024-11-21 05:09:47.693191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.026 [2024-11-21 05:09:47.693203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.026 [2024-11-21 05:09:47.693212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.026 [2024-11-21 05:09:47.693346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.097 ms, result 0 00:23:31.285 00:23:31.285 00:23:31.285 05:09:47 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:33.878 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:33.878 05:09:50 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:33.878 [2024-11-21 05:09:50.273525] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:23:33.878 [2024-11-21 05:09:50.273709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90003 ] 00:23:33.878 [2024-11-21 05:09:50.428704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.878 [2024-11-21 05:09:50.457384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.878 [2024-11-21 05:09:50.572082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:33.878 [2024-11-21 05:09:50.572169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.149 [2024-11-21 05:09:50.733889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.734199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:34.149 [2024-11-21 05:09:50.734224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:34.149 [2024-11-21 05:09:50.734234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.734310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.734321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.149 [2024-11-21 05:09:50.734330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:34.149 [2024-11-21 05:09:50.734338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.734367] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:34.149 [2024-11-21 05:09:50.734659] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:34.149 [2024-11-21 05:09:50.734682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.734692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.149 [2024-11-21 05:09:50.734708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:23:34.149 [2024-11-21 05:09:50.734719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.736383] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:34.149 [2024-11-21 05:09:50.740144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.740197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:34.149 [2024-11-21 05:09:50.740209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:23:34.149 [2024-11-21 05:09:50.740224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.740298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.740312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:34.149 [2024-11-21 05:09:50.740321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:34.149 [2024-11-21 05:09:50.740329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.748363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:34.149 [2024-11-21 05:09:50.748587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.149 [2024-11-21 05:09:50.748629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.992 ms 00:23:34.149 [2024-11-21 05:09:50.748638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.748740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.748750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.149 [2024-11-21 05:09:50.748759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:34.149 [2024-11-21 05:09:50.748774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.748838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.748856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:34.149 [2024-11-21 05:09:50.748865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:34.149 [2024-11-21 05:09:50.748876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.748898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:34.149 [2024-11-21 05:09:50.750892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.750930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.149 [2024-11-21 05:09:50.750942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:23:34.149 [2024-11-21 05:09:50.750959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.750995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.149 [2024-11-21 05:09:50.751004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:34.149 [2024-11-21 05:09:50.751013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:34.149 [2024-11-21 05:09:50.751028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.149 [2024-11-21 05:09:50.751054] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:34.149 [2024-11-21 05:09:50.751081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:34.150 [2024-11-21 05:09:50.751122] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:34.150 [2024-11-21 05:09:50.751143] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:34.150 [2024-11-21 05:09:50.751252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:34.150 [2024-11-21 05:09:50.751266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:34.150 [2024-11-21 05:09:50.751282] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:34.150 [2024-11-21 05:09:50.751293] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:34.150 [2024-11-21 05:09:50.751303] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:34.150 [2024-11-21 05:09:50.751312] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:34.150 [2024-11-21 05:09:50.751324] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:34.150 [2024-11-21 05:09:50.751333] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:34.150 [2024-11-21 05:09:50.751340] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:34.150 [2024-11-21 05:09:50.751350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.150 [2024-11-21 05:09:50.751358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:34.150 [2024-11-21 05:09:50.751367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:23:34.150 [2024-11-21 05:09:50.751379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.150 [2024-11-21 05:09:50.751464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.150 [2024-11-21 05:09:50.751474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:34.150 [2024-11-21 05:09:50.751482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:34.150 [2024-11-21 05:09:50.751489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.150 [2024-11-21 05:09:50.751596] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:34.150 [2024-11-21 05:09:50.751820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:34.150 [2024-11-21 05:09:50.751874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.150 [2024-11-21 05:09:50.751900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.751924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:34.150 [2024-11-21 05:09:50.751955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.751976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:34.150 [2024-11-21 05:09:50.751999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:34.150 [2024-11-21 05:09:50.752021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:34.150 [2024-11-21 05:09:50.752045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.150 [2024-11-21 05:09:50.752520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:34.150 [2024-11-21 05:09:50.752739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:34.150 [2024-11-21 05:09:50.752780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.150 [2024-11-21 05:09:50.752810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:34.150 [2024-11-21 05:09:50.752838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:34.150 [2024-11-21 05:09:50.752864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.752891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:34.150 [2024-11-21 05:09:50.752916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:34.150 [2024-11-21 05:09:50.752943] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.752969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:34.150 [2024-11-21 05:09:50.752994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.150 [2024-11-21 05:09:50.753044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:34.150 [2024-11-21 05:09:50.753067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.150 [2024-11-21 05:09:50.753108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:34.150 [2024-11-21 05:09:50.753143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.150 [2024-11-21 05:09:50.753223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:34.150 [2024-11-21 05:09:50.753244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.150 [2024-11-21 05:09:50.753290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:34.150 [2024-11-21 05:09:50.753312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.150 [2024-11-21 05:09:50.753355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:34.150 [2024-11-21 05:09:50.753376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:34.150 [2024-11-21 05:09:50.753398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.150 [2024-11-21 05:09:50.753420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:34.150 [2024-11-21 05:09:50.753443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:34.150 [2024-11-21 05:09:50.753464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:34.150 [2024-11-21 05:09:50.753506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:34.150 [2024-11-21 05:09:50.753532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753555] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:34.150 [2024-11-21 05:09:50.753589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:34.150 [2024-11-21 05:09:50.753643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.150 [2024-11-21 05:09:50.753667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.150 [2024-11-21 05:09:50.753694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:34.150 [2024-11-21 05:09:50.753715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:34.150 [2024-11-21 05:09:50.753739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:34.150 
[2024-11-21 05:09:50.753760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:34.150 [2024-11-21 05:09:50.753782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:34.150 [2024-11-21 05:09:50.753803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:34.150 [2024-11-21 05:09:50.753830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:34.150 [2024-11-21 05:09:50.753862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.753907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:34.151 [2024-11-21 05:09:50.753932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:34.151 [2024-11-21 05:09:50.753955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:34.151 [2024-11-21 05:09:50.753984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:34.151 [2024-11-21 05:09:50.754008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:34.151 [2024-11-21 05:09:50.754031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:34.151 [2024-11-21 05:09:50.754055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:34.151 [2024-11-21 05:09:50.754078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:34.151 [2024-11-21 05:09:50.754102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:34.151 [2024-11-21 05:09:50.754128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.754152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.754175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.754199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.754223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:34.151 [2024-11-21 05:09:50.754248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:34.151 [2024-11-21 05:09:50.754275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.754306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:34.151 [2024-11-21 05:09:50.754330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:34.151 [2024-11-21 05:09:50.754353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:34.151 [2024-11-21 05:09:50.754383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:34.151 [2024-11-21 05:09:50.754414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.754450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:34.151 [2024-11-21 05:09:50.754484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:23:34.151 [2024-11-21 05:09:50.754513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.771481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.771703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.151 [2024-11-21 05:09:50.771726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.693 ms 00:23:34.151 [2024-11-21 05:09:50.771736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.771831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.771851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.151 [2024-11-21 05:09:50.771861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:34.151 [2024-11-21 05:09:50.771868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.798500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.798572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.151 [2024-11-21 05:09:50.798593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.564 ms 00:23:34.151 [2024-11-21 05:09:50.798649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.798715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.798730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.151 [2024-11-21 05:09:50.798755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.151 [2024-11-21 05:09:50.798773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.799379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.799447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.151 [2024-11-21 05:09:50.799465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:23:34.151 [2024-11-21 05:09:50.799479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.799722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.799741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.151 [2024-11-21 05:09:50.799754] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:23:34.151 [2024-11-21 05:09:50.799768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.807994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.808040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.151 [2024-11-21 05:09:50.808062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.173 ms 00:23:34.151 [2024-11-21 05:09:50.808070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.812020] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:34.151 [2024-11-21 05:09:50.812074] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:34.151 [2024-11-21 05:09:50.812094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.812103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:34.151 [2024-11-21 05:09:50.812113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.932 ms 00:23:34.151 [2024-11-21 05:09:50.812121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.828744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.828794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:34.151 [2024-11-21 05:09:50.828807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.561 ms 00:23:34.151 [2024-11-21 05:09:50.828817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.831881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.831931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:34.151 [2024-11-21 05:09:50.831942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:23:34.151 [2024-11-21 05:09:50.831950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.834809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.835018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:34.151 [2024-11-21 05:09:50.835036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:23:34.151 [2024-11-21 05:09:50.835044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.835382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.835397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.151 [2024-11-21 05:09:50.835407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:34.151 [2024-11-21 05:09:50.835419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.864042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.864096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:34.151 [2024-11-21 05:09:50.864109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.600 ms 00:23:34.151 [2024-11-21 05:09:50.864118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.872326] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:34.151 [2024-11-21 05:09:50.875685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.875726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.151 [2024-11-21 05:09:50.875745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.514 ms 00:23:34.151 [2024-11-21 05:09:50.875754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.875832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.875849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:34.151 [2024-11-21 05:09:50.875862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:34.151 [2024-11-21 05:09:50.875871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.151 [2024-11-21 05:09:50.875940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.151 [2024-11-21 05:09:50.875956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.151 [2024-11-21 05:09:50.875966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:34.152 [2024-11-21 05:09:50.875974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.152 [2024-11-21 05:09:50.875994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.152 [2024-11-21 05:09:50.876003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.152 [2024-11-21 05:09:50.876012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.152 [2024-11-21 05:09:50.876023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.152 [2024-11-21 05:09:50.876057] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:34.152 [2024-11-21 05:09:50.876069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.152 [2024-11-21 05:09:50.876077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:34.152 [2024-11-21 05:09:50.876089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:34.152 [2024-11-21 05:09:50.876097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.413 [2024-11-21 05:09:50.881586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.413 [2024-11-21 05:09:50.881671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:34.413 [2024-11-21 05:09:50.881684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.469 ms 00:23:34.413 [2024-11-21 05:09:50.881693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.413 [2024-11-21 05:09:50.881795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.413 [2024-11-21 05:09:50.881809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:34.413 [2024-11-21 05:09:50.881818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:34.413 [2024-11-21 05:09:50.881830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.413 
[2024-11-21 05:09:50.883068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.717 ms, result 0 00:23:35.353  [2024-11-21T05:09:53.034Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-21T05:09:53.977Z] Copying: 36/1024 [MB] (17 MBps) [2024-11-21T05:09:54.919Z] Copying: 47/1024 [MB] (11 MBps) [2024-11-21T05:09:56.305Z] Copying: 65/1024 [MB] (17 MBps) [2024-11-21T05:09:57.249Z] Copying: 76/1024 [MB] (11 MBps) [2024-11-21T05:09:58.195Z] Copying: 87/1024 [MB] (10 MBps) [2024-11-21T05:09:59.141Z] Copying: 97/1024 [MB] (10 MBps) [2024-11-21T05:10:00.084Z] Copying: 108/1024 [MB] (10 MBps) [2024-11-21T05:10:01.029Z] Copying: 119/1024 [MB] (11 MBps) [2024-11-21T05:10:01.972Z] Copying: 131/1024 [MB] (11 MBps) [2024-11-21T05:10:02.916Z] Copying: 142/1024 [MB] (11 MBps) [2024-11-21T05:10:04.305Z] Copying: 154/1024 [MB] (11 MBps) [2024-11-21T05:10:05.249Z] Copying: 165/1024 [MB] (11 MBps) [2024-11-21T05:10:06.194Z] Copying: 177/1024 [MB] (11 MBps) [2024-11-21T05:10:07.139Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-21T05:10:08.084Z] Copying: 199/1024 [MB] (11 MBps) [2024-11-21T05:10:09.028Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-21T05:10:09.970Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-21T05:10:10.913Z] Copying: 232/1024 [MB] (10 MBps) [2024-11-21T05:10:12.300Z] Copying: 243/1024 [MB] (11 MBps) [2024-11-21T05:10:13.244Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-21T05:10:14.188Z] Copying: 266/1024 [MB] (11 MBps) [2024-11-21T05:10:15.131Z] Copying: 277/1024 [MB] (11 MBps) [2024-11-21T05:10:16.073Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-21T05:10:17.016Z] Copying: 299/1024 [MB] (11 MBps) [2024-11-21T05:10:17.960Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-21T05:10:18.906Z] Copying: 322/1024 [MB] (11 MBps) [2024-11-21T05:10:19.921Z] Copying: 333/1024 [MB] (11 MBps) [2024-11-21T05:10:21.308Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-21T05:10:22.251Z] Copying: 356/1024 [MB] (11 MBps) [2024-11-21T05:10:23.195Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-21T05:10:24.140Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-21T05:10:25.084Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-21T05:10:26.029Z] Copying: 400/1024 [MB] (11 MBps) [2024-11-21T05:10:26.973Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-21T05:10:27.916Z] Copying: 423/1024 [MB] (11 MBps) [2024-11-21T05:10:29.304Z] Copying: 434/1024 [MB] (11 MBps) [2024-11-21T05:10:30.245Z] Copying: 445/1024 [MB] (11 MBps) [2024-11-21T05:10:31.189Z] Copying: 457/1024 [MB] (11 MBps) [2024-11-21T05:10:32.133Z] Copying: 468/1024 [MB] (11 MBps) [2024-11-21T05:10:33.076Z] Copying: 480/1024 [MB] (11 MBps) [2024-11-21T05:10:34.019Z] Copying: 491/1024 [MB] (11 MBps) [2024-11-21T05:10:34.962Z] Copying: 503/1024 [MB] (11 MBps) [2024-11-21T05:10:35.904Z] Copying: 514/1024 [MB] (11 MBps) [2024-11-21T05:10:37.290Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-21T05:10:38.234Z] Copying: 537/1024 [MB] (12 MBps) [2024-11-21T05:10:39.179Z] Copying: 552/1024 [MB] (15 MBps) [2024-11-21T05:10:40.122Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-21T05:10:41.067Z] Copying: 576/1024 [MB] (11 MBps) [2024-11-21T05:10:42.011Z] Copying: 587/1024 [MB] (11 MBps) [2024-11-21T05:10:42.955Z] Copying: 598/1024 [MB] (11 MBps) [2024-11-21T05:10:43.900Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-21T05:10:45.287Z] Copying: 621/1024 [MB] (11 MBps) [2024-11-21T05:10:46.233Z] Copying: 633/1024 [MB] (11 MBps) [2024-11-21T05:10:47.177Z] Copying: 644/1024 [MB] (11 MBps) [2024-11-21T05:10:48.153Z] Copying: 655/1024 [MB] (11 MBps) 
[2024-11-21T05:10:49.102Z] Copying: 665/1024 [MB] (10 MBps) [2024-11-21T05:10:50.050Z] Copying: 692196/1048576 [kB] (10228 kBps) [2024-11-21T05:10:50.996Z] Copying: 695/1024 [MB] (19 MBps) [2024-11-21T05:10:51.942Z] Copying: 708/1024 [MB] (12 MBps) [2024-11-21T05:10:53.331Z] Copying: 721/1024 [MB] (13 MBps) [2024-11-21T05:10:53.903Z] Copying: 737/1024 [MB] (15 MBps) [2024-11-21T05:10:55.276Z] Copying: 749/1024 [MB] (12 MBps) [2024-11-21T05:10:56.210Z] Copying: 802/1024 [MB] (52 MBps) [2024-11-21T05:10:57.141Z] Copying: 855/1024 [MB] (53 MBps) [2024-11-21T05:10:58.073Z] Copying: 909/1024 [MB] (53 MBps) [2024-11-21T05:10:59.014Z] Copying: 962/1024 [MB] (53 MBps) [2024-11-21T05:10:59.956Z] Copying: 996/1024 [MB] (33 MBps) [2024-11-21T05:11:01.340Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-21T05:11:01.912Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-21T05:11:01.912Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-21 05:11:01.699198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.699297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:45.178 [2024-11-21 05:11:01.699317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:45.178 [2024-11-21 05:11:01.699330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.178 [2024-11-21 05:11:01.701705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:45.178 [2024-11-21 05:11:01.705573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.705764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:45.178 [2024-11-21 05:11:01.705797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:24:45.178 [2024-11-21 05:11:01.705809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.178 [2024-11-21 05:11:01.716524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.716595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:45.178 [2024-11-21 05:11:01.716624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.132 ms 00:24:45.178 [2024-11-21 05:11:01.716635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.178 [2024-11-21 05:11:01.742223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.742272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:45.178 [2024-11-21 05:11:01.742286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.566 ms 00:24:45.178 [2024-11-21 05:11:01.742295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.178 [2024-11-21 05:11:01.748498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.748680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:45.178 [2024-11-21 05:11:01.748710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:24:45.178 [2024-11-21 05:11:01.748721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.178 [2024-11-21 05:11:01.751738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.751892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:45.178 [2024-11-21 
05:11:01.751911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.923 ms 00:24:45.178 [2024-11-21 05:11:01.751921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.178 [2024-11-21 05:11:01.756762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.178 [2024-11-21 05:11:01.756810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:45.178 [2024-11-21 05:11:01.756821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:24:45.178 [2024-11-21 05:11:01.756843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.440 [2024-11-21 05:11:01.910356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.440 [2024-11-21 05:11:01.910408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:45.440 [2024-11-21 05:11:01.910421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 153.466 ms 00:24:45.440 [2024-11-21 05:11:01.910430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.440 [2024-11-21 05:11:01.913138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.440 [2024-11-21 05:11:01.913208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:45.440 [2024-11-21 05:11:01.913220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.678 ms 00:24:45.440 [2024-11-21 05:11:01.913228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.440 [2024-11-21 05:11:01.915166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.440 [2024-11-21 05:11:01.915213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:45.440 [2024-11-21 05:11:01.915224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.898 ms 00:24:45.440 [2024-11-21 05:11:01.915232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.440 [2024-11-21 05:11:01.916749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.440 [2024-11-21 05:11:01.916790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:45.441 [2024-11-21 05:11:01.916800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:24:45.441 [2024-11-21 05:11:01.916807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.441 [2024-11-21 05:11:01.918416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.441 [2024-11-21 05:11:01.918462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:45.441 [2024-11-21 05:11:01.918472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:24:45.441 [2024-11-21 05:11:01.918480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.441 [2024-11-21 05:11:01.918516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:45.441 [2024-11-21 05:11:01.918533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106752 / 261120 wr_cnt: 1 state: open 00:24:45.441 [2024-11-21 05:11:01.918545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918792] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.918986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 
05:11:01.918993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:45.441 [2024-11-21 05:11:01.919148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:24:45.442 [2024-11-21 05:11:01.919195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:45.442 [2024-11-21 05:11:01.919381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:45.442 [2024-11-21 05:11:01.919392] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb 00:24:45.442 [2024-11-21 05:11:01.919401] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106752 00:24:45.442 [2024-11-21 
05:11:01.919418] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107712 00:24:45.442 [2024-11-21 05:11:01.919426] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106752 00:24:45.442 [2024-11-21 05:11:01.919436] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:24:45.442 [2024-11-21 05:11:01.919444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:45.442 [2024-11-21 05:11:01.919454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:45.442 [2024-11-21 05:11:01.919462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:45.442 [2024-11-21 05:11:01.919470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:45.442 [2024-11-21 05:11:01.919477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:45.442 [2024-11-21 05:11:01.919492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.442 [2024-11-21 05:11:01.919501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:45.442 [2024-11-21 05:11:01.919511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:24:45.442 [2024-11-21 05:11:01.919518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.922570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.442 [2024-11-21 05:11:01.922604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:45.442 [2024-11-21 05:11:01.922643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:24:45.442 [2024-11-21 05:11:01.922653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.922817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:45.442 [2024-11-21 05:11:01.922827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:45.442 [2024-11-21 05:11:01.922837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:45.442 [2024-11-21 05:11:01.922851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.932749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.932796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:45.442 [2024-11-21 05:11:01.932807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.932816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.932886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.932895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:45.442 [2024-11-21 05:11:01.932912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.932924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.932989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.933001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:45.442 [2024-11-21 05:11:01.933011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.933020] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.933036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.933044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:45.442 [2024-11-21 05:11:01.933057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.933066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.951809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.952041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:45.442 [2024-11-21 05:11:01.952074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.952085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.967292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.967494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:45.442 [2024-11-21 05:11:01.967514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.967525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.967657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.967670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:45.442 [2024-11-21 05:11:01.967680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.967690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.967734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.967744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:45.442 [2024-11-21 05:11:01.967754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.442 [2024-11-21 05:11:01.967762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.442 [2024-11-21 05:11:01.967860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.442 [2024-11-21 05:11:01.967877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:45.442 [2024-11-21 05:11:01.967887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.443 [2024-11-21 05:11:01.967896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.443 [2024-11-21 05:11:01.967937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.443 [2024-11-21 05:11:01.967948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:45.443 [2024-11-21 05:11:01.967957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.443 [2024-11-21 05:11:01.967966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.443 [2024-11-21 05:11:01.968018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.443 [2024-11-21 05:11:01.968032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:45.443 [2024-11-21 05:11:01.968041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:24:45.443 [2024-11-21 05:11:01.968049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.443 [2024-11-21 05:11:01.968107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:45.443 [2024-11-21 05:11:01.968120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:45.443 [2024-11-21 05:11:01.968129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:45.443 [2024-11-21 05:11:01.968139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:45.443 [2024-11-21 05:11:01.968304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.863 ms, result 0 00:24:46.384 00:24:46.384 00:24:46.384 05:11:02 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:46.384 [2024-11-21 05:11:03.019190] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:24:46.384 [2024-11-21 05:11:03.019362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90749 ] 00:24:46.646 [2024-11-21 05:11:03.183927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.646 [2024-11-21 05:11:03.223906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.646 [2024-11-21 05:11:03.371028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.646 [2024-11-21 05:11:03.371432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.908 [2024-11-21 05:11:03.534318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.534380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:46.908 [2024-11-21 05:11:03.534398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:46.908 [2024-11-21 05:11:03.534407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.908 [2024-11-21 05:11:03.534475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.534490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.908 [2024-11-21 05:11:03.534499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:46.908 [2024-11-21 05:11:03.534512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.908 [2024-11-21 05:11:03.534537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:46.908 [2024-11-21 05:11:03.534833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:46.908 [2024-11-21 05:11:03.534851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.534864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.908 [2024-11-21 05:11:03.534879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:24:46.908 [2024-11-21 05:11:03.534891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:46.908 [2024-11-21 05:11:03.537079] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:46.908 [2024-11-21 05:11:03.544229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.544337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:46.908 [2024-11-21 05:11:03.544393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.146 ms 00:24:46.908 [2024-11-21 05:11:03.544429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.908 [2024-11-21 05:11:03.544594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.544712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:46.908 [2024-11-21 05:11:03.544779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:46.908 [2024-11-21 05:11:03.544819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.908 [2024-11-21 05:11:03.556971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.557018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.908 [2024-11-21 05:11:03.557035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.895 ms 00:24:46.908 [2024-11-21 05:11:03.557044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.908 [2024-11-21 05:11:03.557157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.908 [2024-11-21 05:11:03.557168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.908 [2024-11-21 05:11:03.557208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:46.908 [2024-11-21 05:11:03.557217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.909 [2024-11-21 05:11:03.557282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.909 [2024-11-21 05:11:03.557294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:46.909 [2024-11-21 05:11:03.557307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.909 [2024-11-21 05:11:03.557315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.909 [2024-11-21 05:11:03.557344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.909 [2024-11-21 05:11:03.560024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.909 [2024-11-21 05:11:03.560213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.909 [2024-11-21 05:11:03.560239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:24:46.909 [2024-11-21 05:11:03.560248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.909 [2024-11-21 05:11:03.560295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.909 [2024-11-21 05:11:03.560309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:46.909 [2024-11-21 05:11:03.560323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:46.909 [2024-11-21 05:11:03.560332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.909 [2024-11-21 05:11:03.560361] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout 
setup mode 0 00:24:46.909 [2024-11-21 05:11:03.560390] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:46.909 [2024-11-21 05:11:03.560431] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:46.909 [2024-11-21 05:11:03.560450] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:46.909 [2024-11-21 05:11:03.560567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:46.909 [2024-11-21 05:11:03.560579] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:46.909 [2024-11-21 05:11:03.560591] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:46.909 [2024-11-21 05:11:03.560606] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:46.909 [2024-11-21 05:11:03.560643] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:46.909 [2024-11-21 05:11:03.560657] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:46.909 [2024-11-21 05:11:03.560665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:46.909 [2024-11-21 05:11:03.560674] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:46.909 [2024-11-21 05:11:03.560681] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:46.909 [2024-11-21 05:11:03.560691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.909 [2024-11-21 05:11:03.560700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:46.909 [2024-11-21 05:11:03.560708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:46.909 [2024-11-21 05:11:03.560720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.909 [2024-11-21 05:11:03.560804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.909 [2024-11-21 05:11:03.560818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:46.909 [2024-11-21 05:11:03.560836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:46.909 [2024-11-21 05:11:03.560845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.909 [2024-11-21 05:11:03.560954] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:46.909 [2024-11-21 05:11:03.560966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:46.909 [2024-11-21 05:11:03.560976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.909 [2024-11-21 05:11:03.560986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.909 [2024-11-21 05:11:03.560998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:46.909 [2024-11-21 05:11:03.561014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:46.909 [2024-11-21 05:11:03.561040] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.909 [2024-11-21 05:11:03.561055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:46.909 [2024-11-21 05:11:03.561062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:46.909 [2024-11-21 05:11:03.561069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.909 [2024-11-21 05:11:03.561077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:46.909 [2024-11-21 05:11:03.561089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:46.909 [2024-11-21 05:11:03.561103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:46.909 [2024-11-21 05:11:03.561119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:46.909 [2024-11-21 05:11:03.561145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:46.909 [2024-11-21 05:11:03.561169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:46.909 [2024-11-21 05:11:03.561206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:46.909 [2024-11-21 05:11:03.561231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:46.909 [2024-11-21 05:11:03.561257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.909 [2024-11-21 05:11:03.561272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:46.909 [2024-11-21 05:11:03.561278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:46.909 [2024-11-21 05:11:03.561285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.909 [2024-11-21 05:11:03.561293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:46.909 [2024-11-21 05:11:03.561301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:46.909 [2024-11-21 05:11:03.561308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.909 
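The figures in this layout dump are internally consistent and can be cross-checked against the superblock records that follow below. With the 4 KiB FTL block size implied by the dump (an inference from the numbers agreeing, not something the log states), the 80.00 MiB l2p region reported earlier is exactly the L2P table described by the parameters above (20971520 entries, address size 4):

```latex
\begin{align*}
\text{l2p region} &= 20\,971\,520\ \text{entries} \times 4\ \text{B}
   = 83\,886\,080\ \text{B} = 80\ \text{MiB}\\
\text{superblock } \mathtt{blk\_sz} &= \mathtt{0x5000} = 20\,480\ \text{blocks}
   \times 4\ \text{KiB} = 80\ \text{MiB}\\
\text{data\_btm} &= \mathtt{0x1900000} = 26\,214\,400\ \text{blocks}
   \times 4\ \text{KiB} = 102\,400\ \text{MiB}
\end{align*}
```

The same conversion links each region's MiB figure here to its `blk_offs`/`blk_sz` pair in the SB metadata dump further down.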
[2024-11-21 05:11:03.561315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:46.909 [2024-11-21 05:11:03.561322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:46.909 [2024-11-21 05:11:03.561329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561336] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:46.909 [2024-11-21 05:11:03.561344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:46.909 [2024-11-21 05:11:03.561358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.909 [2024-11-21 05:11:03.561369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.909 [2024-11-21 05:11:03.561381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:46.909 [2024-11-21 05:11:03.561389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:46.909 [2024-11-21 05:11:03.561396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:46.910 [2024-11-21 05:11:03.561403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:46.910 [2024-11-21 05:11:03.561411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:46.910 [2024-11-21 05:11:03.561417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:46.910 [2024-11-21 05:11:03.561426] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:46.910 [2024-11-21 05:11:03.561437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.910 [2024-11-21 05:11:03.561446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:46.910 [2024-11-21 05:11:03.561453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:46.910 [2024-11-21 05:11:03.561460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:46.910 [2024-11-21 05:11:03.561467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:46.910 [2024-11-21 05:11:03.561474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:46.910 [2024-11-21 05:11:03.561482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:46.910 [2024-11-21 05:11:03.561490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:46.910 [2024-11-21 05:11:03.561499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:46.910 [2024-11-21 05:11:03.561506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:46.910 [2024-11-21 05:11:03.561514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:46.910 [2024-11-21 
05:11:03.561522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:46.910 [2024-11-21 05:11:03.561530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:46.910 [2024-11-21 05:11:03.561539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:46.910 [2024-11-21 05:11:03.561546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:46.910 [2024-11-21 05:11:03.561554] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:46.910 [2024-11-21 05:11:03.561563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.910 [2024-11-21 05:11:03.561572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:46.910 [2024-11-21 05:11:03.561579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:46.910 [2024-11-21 05:11:03.561586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:46.910 [2024-11-21 05:11:03.561593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:46.910 [2024-11-21 05:11:03.561601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.561623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:46.910 [2024-11-21 05:11:03.561632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:24:46.910 [2024-11-21 05:11:03.561643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.581957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.582007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.910 [2024-11-21 05:11:03.582020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.243 ms 00:24:46.910 [2024-11-21 05:11:03.582029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.582125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.582134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.910 [2024-11-21 05:11:03.582143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:46.910 [2024-11-21 05:11:03.582152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.608440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.608520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.910 [2024-11-21 05:11:03.608542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.223 ms 00:24:46.910 [2024-11-21 05:11:03.608558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 
05:11:03.608662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.608682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.910 [2024-11-21 05:11:03.608698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:46.910 [2024-11-21 05:11:03.608712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.609544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.609600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.910 [2024-11-21 05:11:03.609653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:24:46.910 [2024-11-21 05:11:03.609667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.609898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.609915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.910 [2024-11-21 05:11:03.609930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:24:46.910 [2024-11-21 05:11:03.609943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.621699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.621754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.910 [2024-11-21 05:11:03.621774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.719 ms 00:24:46.910 [2024-11-21 05:11:03.621784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.910 [2024-11-21 05:11:03.626400] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:46.910 [2024-11-21 05:11:03.626456] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:46.910 [2024-11-21 05:11:03.626470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.910 [2024-11-21 05:11:03.626487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:46.910 [2024-11-21 05:11:03.626498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.566 ms 00:24:46.910 [2024-11-21 05:11:03.626506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.642862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.172 [2024-11-21 05:11:03.642913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:47.172 [2024-11-21 05:11:03.642925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.299 ms 00:24:47.172 [2024-11-21 05:11:03.642934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.646044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.172 [2024-11-21 05:11:03.646095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:47.172 [2024-11-21 05:11:03.646106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:24:47.172 [2024-11-21 05:11:03.646114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.648823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:47.172 [2024-11-21 05:11:03.649009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:47.172 [2024-11-21 05:11:03.649028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:24:47.172 [2024-11-21 05:11:03.649035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.649424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.172 [2024-11-21 05:11:03.649440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:47.172 [2024-11-21 05:11:03.649453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:47.172 [2024-11-21 05:11:03.649461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.678582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.172 [2024-11-21 05:11:03.678673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:47.172 [2024-11-21 05:11:03.678691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.091 ms 00:24:47.172 [2024-11-21 05:11:03.678701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.686958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:47.172 [2024-11-21 05:11:03.690792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.172 [2024-11-21 05:11:03.690842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:47.172 [2024-11-21 05:11:03.690856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.030 ms 00:24:47.172 [2024-11-21 05:11:03.690871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.172 [2024-11-21 05:11:03.690966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.172 [2024-11-21 05:11:03.690981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:47.172 [2024-11-21 05:11:03.690992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:47.173 [2024-11-21 05:11:03.691001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.173 [2024-11-21 05:11:03.693285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.173 [2024-11-21 05:11:03.693330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:47.173 [2024-11-21 05:11:03.693344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:24:47.173 [2024-11-21 05:11:03.693354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.173 [2024-11-21 05:11:03.693387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.173 [2024-11-21 05:11:03.693397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:47.173 [2024-11-21 05:11:03.693406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:47.173 [2024-11-21 05:11:03.693415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.173 [2024-11-21 05:11:03.693462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:47.173 [2024-11-21 05:11:03.693476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.173 [2024-11-21 05:11:03.693486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:24:47.173 [2024-11-21 05:11:03.693501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:47.173 [2024-11-21 05:11:03.693512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.173 [2024-11-21 05:11:03.700675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.173 [2024-11-21 05:11:03.700728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:47.173 [2024-11-21 05:11:03.700751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.143 ms 00:24:47.173 [2024-11-21 05:11:03.700759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.173 [2024-11-21 05:11:03.700856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.173 [2024-11-21 05:11:03.700867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:47.173 [2024-11-21 05:11:03.700877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:47.173 [2024-11-21 05:11:03.700891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.173 [2024-11-21 05:11:03.702401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.520 ms, result 0 00:24:48.564  [2024-11-21T05:11:06.243Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-21T05:11:07.187Z] Copying: 36/1024 [MB] (21 MBps) [2024-11-21T05:11:08.132Z] Copying: 56/1024 [MB] (19 MBps) [2024-11-21T05:11:09.076Z] Copying: 67/1024 [MB] (10 MBps) [2024-11-21T05:11:10.021Z] Copying: 85/1024 [MB] (17 MBps) [2024-11-21T05:11:10.964Z] Copying: 95/1024 [MB] (10 MBps) [2024-11-21T05:11:11.911Z] Copying: 106/1024 [MB] (10 MBps) [2024-11-21T05:11:13.301Z] Copying: 116/1024 [MB] (10 MBps) [2024-11-21T05:11:14.248Z] Copying: 128/1024 [MB] (12 MBps) [2024-11-21T05:11:15.191Z] Copying: 143/1024 [MB] (15 MBps) [2024-11-21T05:11:16.137Z] Copying: 159/1024 [MB] (15 MBps) [2024-11-21T05:11:17.143Z] Copying: 173/1024 [MB] (13 MBps) [2024-11-21T05:11:18.086Z] Copying: 186/1024 [MB] (13 MBps) [2024-11-21T05:11:19.026Z] Copying: 204/1024 [MB] (18 MBps) [2024-11-21T05:11:19.969Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-21T05:11:20.914Z] Copying: 234/1024 [MB] (18 MBps) [2024-11-21T05:11:22.302Z] Copying: 257/1024 [MB] (22 MBps) [2024-11-21T05:11:23.244Z] Copying: 278/1024 [MB] (21 MBps) [2024-11-21T05:11:24.186Z] Copying: 303/1024 [MB] (25 MBps) [2024-11-21T05:11:25.130Z] Copying: 318/1024 [MB] (14 MBps) [2024-11-21T05:11:26.075Z] Copying: 332/1024 [MB] (14 MBps) [2024-11-21T05:11:27.019Z] Copying: 347/1024 [MB] (14 MBps) [2024-11-21T05:11:27.962Z] Copying: 365/1024 [MB] (18 MBps) [2024-11-21T05:11:28.904Z] Copying: 385/1024 [MB] (20 MBps) [2024-11-21T05:11:30.289Z] Copying: 405/1024 [MB] (20 MBps) [2024-11-21T05:11:31.233Z] Copying: 416/1024 [MB] (10 MBps) [2024-11-21T05:11:32.179Z] Copying: 426/1024 [MB] (10 MBps) [2024-11-21T05:11:33.123Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-21T05:11:34.069Z] Copying: 447/1024 [MB] (10 MBps) [2024-11-21T05:11:35.011Z] Copying: 458/1024 [MB] (10 MBps) [2024-11-21T05:11:35.954Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-21T05:11:37.341Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-21T05:11:37.921Z] Copying: 490/1024 [MB] (11 MBps) [2024-11-21T05:11:39.309Z] Copying: 505/1024 [MB] (15 MBps) [2024-11-21T05:11:40.253Z] Copying: 525/1024 [MB] (19 MBps) [2024-11-21T05:11:41.198Z] Copying: 544/1024 [MB] (19 MBps) [2024-11-21T05:11:42.144Z] Copying: 555/1024 [MB] (11 MBps) 
[2024-11-21T05:11:43.089Z] Copying: 566/1024 [MB] (10 MBps) [2024-11-21T05:11:44.034Z] Copying: 580/1024 [MB] (13 MBps) [2024-11-21T05:11:44.977Z] Copying: 594/1024 [MB] (14 MBps) [2024-11-21T05:11:45.978Z] Copying: 608/1024 [MB] (13 MBps) [2024-11-21T05:11:46.923Z] Copying: 625/1024 [MB] (16 MBps) [2024-11-21T05:11:48.312Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-21T05:11:49.257Z] Copying: 647/1024 [MB] (11 MBps) [2024-11-21T05:11:50.198Z] Copying: 659/1024 [MB] (12 MBps) [2024-11-21T05:11:51.141Z] Copying: 670/1024 [MB] (10 MBps) [2024-11-21T05:11:52.077Z] Copying: 682/1024 [MB] (11 MBps) [2024-11-21T05:11:53.020Z] Copying: 710/1024 [MB] (28 MBps) [2024-11-21T05:11:53.965Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-21T05:11:54.904Z] Copying: 735/1024 [MB] (13 MBps) [2024-11-21T05:11:56.289Z] Copying: 757/1024 [MB] (22 MBps) [2024-11-21T05:11:57.229Z] Copying: 777/1024 [MB] (19 MBps) [2024-11-21T05:11:58.170Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-21T05:11:59.114Z] Copying: 803/1024 [MB] (14 MBps) [2024-11-21T05:12:00.064Z] Copying: 817/1024 [MB] (13 MBps) [2024-11-21T05:12:01.009Z] Copying: 840/1024 [MB] (23 MBps) [2024-11-21T05:12:01.953Z] Copying: 852/1024 [MB] (11 MBps) [2024-11-21T05:12:03.333Z] Copying: 864/1024 [MB] (12 MBps) [2024-11-21T05:12:03.906Z] Copying: 882/1024 [MB] (18 MBps) [2024-11-21T05:12:05.290Z] Copying: 901/1024 [MB] (18 MBps) [2024-11-21T05:12:06.230Z] Copying: 926/1024 [MB] (24 MBps) [2024-11-21T05:12:07.168Z] Copying: 943/1024 [MB] (17 MBps) [2024-11-21T05:12:08.112Z] Copying: 967/1024 [MB] (24 MBps) [2024-11-21T05:12:09.054Z] Copying: 988/1024 [MB] (20 MBps) [2024-11-21T05:12:09.996Z] Copying: 1006/1024 [MB] (17 MBps) [2024-11-21T05:12:10.570Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-21 05:12:10.288534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.288695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.836 [2024-11-21 05:12:10.288723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:53.836 [2024-11-21 05:12:10.288739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.288781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.836 [2024-11-21 05:12:10.289886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.289940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.836 [2024-11-21 05:12:10.289960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:25:53.836 [2024-11-21 05:12:10.289985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.290397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.290428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.836 [2024-11-21 05:12:10.290445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:25:53.836 [2024-11-21 05:12:10.290461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.299594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.299656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.836 [2024-11-21 05:12:10.299668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 9.106 ms 00:25:53.836 [2024-11-21 05:12:10.299678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.305952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.305997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.836 [2024-11-21 05:12:10.306009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:25:53.836 [2024-11-21 05:12:10.306018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.309273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.309326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.836 [2024-11-21 05:12:10.309338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.196 ms 00:25:53.836 [2024-11-21 05:12:10.309346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.314232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.314289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.836 [2024-11-21 05:12:10.314301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.839 ms 00:25:53.836 [2024-11-21 05:12:10.314310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.481970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.482034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.836 [2024-11-21 05:12:10.482050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.601 ms 00:25:53.836 [2024-11-21 05:12:10.482061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.484867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.484944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.836 [2024-11-21 05:12:10.484957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:25:53.836 [2024-11-21 05:12:10.484966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.487464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.487510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.836 [2024-11-21 05:12:10.487521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.453 ms 00:25:53.836 [2024-11-21 05:12:10.487530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.489698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.489740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.836 [2024-11-21 05:12:10.489749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:25:53.836 [2024-11-21 05:12:10.489758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.491927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.836 [2024-11-21 05:12:10.491970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.836 [2024-11-21 
05:12:10.491982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:25:53.836 [2024-11-21 05:12:10.491990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.836 [2024-11-21 05:12:10.492026] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.836 [2024-11-21 05:12:10.492047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:53.836 [2024-11-21 05:12:10.492059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:53.836 [2024-11-21 05:12:10.492068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.836 [2024-11-21 05:12:10.492077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.836 [2024-11-21 05:12:10.492086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492462] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 
05:12:10.492688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.837 [2024-11-21 05:12:10.492883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.838 [2024-11-21 05:12:10.492892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.838 [2024-11-21 05:12:10.492900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:25:53.838 [2024-11-21 05:12:10.492909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.838 [2024-11-21 05:12:10.492918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.838 [2024-11-21 05:12:10.492926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.838 [2024-11-21 05:12:10.492944] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.838 [2024-11-21 05:12:10.492953] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d9c670ef-39b5-4ccb-82f5-cc919b0b9ecb 00:25:53.838 [2024-11-21 05:12:10.492964] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:53.838 [2024-11-21 05:12:10.492981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 25280 00:25:53.838 [2024-11-21 05:12:10.492999] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 24320 00:25:53.838 [2024-11-21 05:12:10.493010] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0395 00:25:53.838 [2024-11-21 05:12:10.493021] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.838 [2024-11-21 05:12:10.493030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.838 [2024-11-21 05:12:10.493038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.838 [2024-11-21 05:12:10.493047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.838 [2024-11-21 05:12:10.493054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.838 [2024-11-21 05:12:10.493063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.838 [2024-11-21 05:12:10.493083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.838 [2024-11-21 05:12:10.493092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:25:53.838 [2024-11-21 05:12:10.493101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.496019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.838 [2024-11-21 05:12:10.496060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.838 [2024-11-21 05:12:10.496071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:25:53.838 [2024-11-21 05:12:10.496081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.496222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.838 [2024-11-21 05:12:10.496231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.838 [2024-11-21 05:12:10.496240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:53.838 [2024-11-21 05:12:10.496249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.505659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.505701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.838 [2024-11-21 05:12:10.505713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.505725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.505796] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.505807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.838 [2024-11-21 05:12:10.505817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.505828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.505905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.505918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.838 [2024-11-21 05:12:10.505927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.505936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.505952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.505962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.838 [2024-11-21 05:12:10.505973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.505983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.525077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.525138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.838 [2024-11-21 05:12:10.525152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.525164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.540907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.540966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.838 [2024-11-21 05:12:10.540981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.540992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.541082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.838 [2024-11-21 05:12:10.541093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.541103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.541160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.838 [2024-11-21 05:12:10.541170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.541179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.541294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.838 [2024-11-21 05:12:10.541309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.541323] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.541370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.838 [2024-11-21 05:12:10.541379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.541389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.541458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.838 [2024-11-21 05:12:10.541471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.541481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.838 [2024-11-21 05:12:10.541563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.838 [2024-11-21 05:12:10.541573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.838 [2024-11-21 05:12:10.541584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.838 [2024-11-21 05:12:10.541851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 253.277 ms, result 0 00:25:54.099 00:25:54.099 00:25:54.099 05:12:10 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.647 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88598 00:25:56.647 05:12:13 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88598 ']' 00:25:56.647 Process with pid 88598 is not found 00:25:56.647 Remove shared memory files 00:25:56.647 05:12:13 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88598 00:25:56.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88598) - No such process 00:25:56.647 05:12:13 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88598 is not found' 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:56.647 05:12:13 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:56.647 00:25:56.647 real 4m37.386s 00:25:56.647 user 4m25.384s 
00:25:56.647 sys 0m11.802s 00:25:56.647 05:12:13 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.647 ************************************ 00:25:56.647 END TEST ftl_restore 00:25:56.647 ************************************ 00:25:56.647 05:12:13 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 05:12:13 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:56.647 05:12:13 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:56.647 05:12:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.647 05:12:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 ************************************ 00:25:56.647 START TEST ftl_dirty_shutdown 00:25:56.647 ************************************ 00:25:56.647 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:56.647 * Looking for test storage... 00:25:56.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.647 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.647 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.647 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.909 --rc genhtml_branch_coverage=1 00:25:56.909 --rc genhtml_function_coverage=1 00:25:56.909 --rc genhtml_legend=1 00:25:56.909 --rc geninfo_all_blocks=1 00:25:56.909 --rc geninfo_unexecuted_blocks=1 00:25:56.909 00:25:56.909 ' 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.909 --rc genhtml_branch_coverage=1 00:25:56.909 --rc genhtml_function_coverage=1 00:25:56.909 --rc genhtml_legend=1 00:25:56.909 --rc geninfo_all_blocks=1 00:25:56.909 --rc geninfo_unexecuted_blocks=1 00:25:56.909 00:25:56.909 ' 00:25:56.909 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.909 --rc genhtml_branch_coverage=1 00:25:56.909 --rc genhtml_function_coverage=1 00:25:56.909 --rc genhtml_legend=1 00:25:56.910 --rc geninfo_all_blocks=1 00:25:56.910 --rc geninfo_unexecuted_blocks=1 00:25:56.910 00:25:56.910 ' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.910 --rc genhtml_branch_coverage=1 00:25:56.910 --rc genhtml_function_coverage=1 00:25:56.910 --rc genhtml_legend=1 00:25:56.910 --rc geninfo_all_blocks=1 00:25:56.910 --rc geninfo_unexecuted_blocks=1 00:25:56.910 00:25:56.910 ' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:56.910 05:12:13 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91524 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91524 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91524 ']' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:56.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.910 05:12:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:56.910 [2024-11-21 05:12:13.520886] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
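Before spdk_tgt comes up here, the xtrace above shows how dirty_shutdown.sh consumed its arguments: getopts ':u:c:' picked up -c 0000:00:10.0 as the NV-cache BDF, shift 2 dropped the consumed pair, and the remaining positional 0000:00:11.0 became the base device, after which the geometry defaults (timeout=240, block_size=4096, chunk_size=262144, data_size=262144) were set and the restore_kill trap installed. A minimal bash sketch of that argument-handling pattern, reconstructed from the trace rather than copied from the SPDK script:

    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c <bdf>: NV-cache device, 0000:00:10.0 in this run
            u) uuid=$OPTARG ;;       # assumed: -u passes a UUID for an existing FTL instance
        esac
    done
    shift 2                          # mirrors the literal "shift 2" in the trace above
    device=$1                        # remaining positional BDF, 0000:00:11.0 in this run

The unconditional shift 2 only works because the test always passes exactly one option pair; a general-purpose script would use shift $((OPTIND-1)) instead. Once parsed, the script launches spdk_tgt -m 0x1 as pid 91524 and blocks in waitforlisten until /var/tmp/spdk.sock answers, which is the point the log has reached here.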
00:25:56.910 [2024-11-21 05:12:13.521044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91524 ] 00:25:57.174 [2024-11-21 05:12:13.684878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.174 [2024-11-21 05:12:13.713316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:57.820 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:58.082 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:58.344 { 00:25:58.344 "name": "nvme0n1", 00:25:58.344 "aliases": [ 00:25:58.344 "47e9f34a-6df4-4d18-817c-20a3eac335e2" 00:25:58.344 ], 00:25:58.344 "product_name": "NVMe disk", 00:25:58.344 "block_size": 4096, 00:25:58.344 "num_blocks": 1310720, 00:25:58.344 "uuid": "47e9f34a-6df4-4d18-817c-20a3eac335e2", 00:25:58.344 "numa_id": -1, 00:25:58.344 "assigned_rate_limits": { 00:25:58.344 "rw_ios_per_sec": 0, 00:25:58.344 "rw_mbytes_per_sec": 0, 00:25:58.344 "r_mbytes_per_sec": 0, 00:25:58.344 "w_mbytes_per_sec": 0 00:25:58.344 }, 00:25:58.344 "claimed": true, 00:25:58.344 "claim_type": "read_many_write_one", 00:25:58.344 "zoned": false, 00:25:58.344 "supported_io_types": { 00:25:58.344 "read": true, 00:25:58.344 "write": true, 00:25:58.344 "unmap": true, 00:25:58.344 "flush": true, 00:25:58.344 "reset": true, 00:25:58.344 "nvme_admin": true, 00:25:58.344 "nvme_io": true, 00:25:58.344 "nvme_io_md": false, 00:25:58.344 "write_zeroes": true, 00:25:58.344 "zcopy": false, 00:25:58.344 "get_zone_info": false, 00:25:58.344 "zone_management": false, 00:25:58.344 "zone_append": false, 00:25:58.344 "compare": true, 00:25:58.344 "compare_and_write": false, 00:25:58.344 "abort": true, 00:25:58.344 "seek_hole": false, 00:25:58.344 "seek_data": false, 00:25:58.344 
"copy": true, 00:25:58.344 "nvme_iov_md": false 00:25:58.344 }, 00:25:58.344 "driver_specific": { 00:25:58.344 "nvme": [ 00:25:58.344 { 00:25:58.344 "pci_address": "0000:00:11.0", 00:25:58.344 "trid": { 00:25:58.344 "trtype": "PCIe", 00:25:58.344 "traddr": "0000:00:11.0" 00:25:58.344 }, 00:25:58.344 "ctrlr_data": { 00:25:58.344 "cntlid": 0, 00:25:58.344 "vendor_id": "0x1b36", 00:25:58.344 "model_number": "QEMU NVMe Ctrl", 00:25:58.344 "serial_number": "12341", 00:25:58.344 "firmware_revision": "8.0.0", 00:25:58.344 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:58.344 "oacs": { 00:25:58.344 "security": 0, 00:25:58.344 "format": 1, 00:25:58.344 "firmware": 0, 00:25:58.344 "ns_manage": 1 00:25:58.344 }, 00:25:58.344 "multi_ctrlr": false, 00:25:58.344 "ana_reporting": false 00:25:58.344 }, 00:25:58.344 "vs": { 00:25:58.344 "nvme_version": "1.4" 00:25:58.344 }, 00:25:58.344 "ns_data": { 00:25:58.344 "id": 1, 00:25:58.344 "can_share": false 00:25:58.344 } 00:25:58.344 } 00:25:58.344 ], 00:25:58.344 "mp_policy": "active_passive" 00:25:58.344 } 00:25:58.344 } 00:25:58.344 ]' 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:58.344 05:12:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:58.606 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fad6dd4a-ba6c-403d-a636-9972cdfd04f5 00:25:58.606 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:58.606 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fad6dd4a-ba6c-403d-a636-9972cdfd04f5 00:25:58.867 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:59.129 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=29125aa1-3373-444f-af12-bdfc51a7eb82 00:25:59.129 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 29125aa1-3373-444f-af12-bdfc51a7eb82 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.390 05:12:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.390 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:59.390 { 00:25:59.390 "name": "5319150d-0ca9-4abc-bdba-fcf527a30c6d", 00:25:59.390 "aliases": [ 00:25:59.390 "lvs/nvme0n1p0" 00:25:59.390 ], 00:25:59.390 "product_name": "Logical Volume", 00:25:59.390 "block_size": 4096, 00:25:59.390 "num_blocks": 26476544, 00:25:59.390 "uuid": "5319150d-0ca9-4abc-bdba-fcf527a30c6d", 00:25:59.390 "assigned_rate_limits": { 00:25:59.390 "rw_ios_per_sec": 0, 00:25:59.390 "rw_mbytes_per_sec": 0, 00:25:59.390 "r_mbytes_per_sec": 0, 00:25:59.390 "w_mbytes_per_sec": 0 00:25:59.390 }, 00:25:59.390 "claimed": false, 00:25:59.390 "zoned": false, 00:25:59.390 "supported_io_types": { 00:25:59.390 "read": true, 00:25:59.390 "write": true, 00:25:59.390 "unmap": true, 00:25:59.390 "flush": false, 00:25:59.390 "reset": true, 00:25:59.390 "nvme_admin": false, 00:25:59.390 "nvme_io": false, 00:25:59.390 "nvme_io_md": false, 00:25:59.390 "write_zeroes": true, 00:25:59.390 "zcopy": false, 00:25:59.390 "get_zone_info": false, 00:25:59.390 "zone_management": false, 00:25:59.390 "zone_append": false, 00:25:59.390 "compare": false, 00:25:59.390 "compare_and_write": false, 00:25:59.390 "abort": false, 00:25:59.390 "seek_hole": true, 00:25:59.390 "seek_data": true, 00:25:59.390 "copy": false, 00:25:59.390 "nvme_iov_md": false 00:25:59.390 }, 00:25:59.390 "driver_specific": { 00:25:59.390 "lvol": { 00:25:59.390 "lvol_store_uuid": "29125aa1-3373-444f-af12-bdfc51a7eb82", 00:25:59.390 "base_bdev": "nvme0n1", 00:25:59.390 "thin_provision": true, 00:25:59.390 "num_allocated_clusters": 0, 00:25:59.390 "snapshot": false, 00:25:59.390 "clone": false, 00:25:59.390 "esnap_clone": false 00:25:59.390 } 00:25:59.390 } 00:25:59.390 } 00:25:59.390 ]' 00:25:59.390 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:59.390 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:59.390 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:59.652 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:59.652 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:59.652 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:59.652 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:59.652 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:59.652 05:12:16 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.912 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.171 { 00:26:00.171 "name": "5319150d-0ca9-4abc-bdba-fcf527a30c6d", 00:26:00.171 "aliases": [ 00:26:00.171 "lvs/nvme0n1p0" 00:26:00.171 ], 00:26:00.171 "product_name": "Logical Volume", 00:26:00.171 "block_size": 4096, 00:26:00.171 "num_blocks": 26476544, 00:26:00.171 "uuid": "5319150d-0ca9-4abc-bdba-fcf527a30c6d", 00:26:00.171 "assigned_rate_limits": { 00:26:00.171 "rw_ios_per_sec": 0, 00:26:00.171 "rw_mbytes_per_sec": 0, 00:26:00.171 "r_mbytes_per_sec": 0, 00:26:00.171 "w_mbytes_per_sec": 0 00:26:00.171 }, 00:26:00.171 "claimed": false, 00:26:00.171 "zoned": false, 00:26:00.171 "supported_io_types": { 00:26:00.171 "read": true, 00:26:00.171 "write": true, 00:26:00.171 "unmap": true, 00:26:00.171 "flush": false, 00:26:00.171 "reset": true, 00:26:00.171 "nvme_admin": false, 00:26:00.171 "nvme_io": false, 00:26:00.171 "nvme_io_md": false, 00:26:00.171 "write_zeroes": true, 00:26:00.171 "zcopy": false, 00:26:00.171 "get_zone_info": false, 00:26:00.171 "zone_management": false, 00:26:00.171 "zone_append": false, 00:26:00.171 "compare": false, 00:26:00.171 "compare_and_write": false, 00:26:00.171 "abort": false, 00:26:00.171 "seek_hole": true, 00:26:00.171 "seek_data": true, 00:26:00.171 "copy": false, 00:26:00.171 "nvme_iov_md": false 00:26:00.171 }, 00:26:00.171 "driver_specific": { 00:26:00.171 "lvol": { 00:26:00.171 "lvol_store_uuid": "29125aa1-3373-444f-af12-bdfc51a7eb82", 00:26:00.171 "base_bdev": "nvme0n1", 00:26:00.171 "thin_provision": true, 00:26:00.171 "num_allocated_clusters": 0, 00:26:00.171 "snapshot": false, 00:26:00.171 "clone": false, 00:26:00.171 "esnap_clone": false 00:26:00.171 } 00:26:00.171 } 00:26:00.171 } 00:26:00.171 ]' 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:00.171 05:12:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:00.430 05:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5319150d-0ca9-4abc-bdba-fcf527a30c6d 00:26:00.430 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.430 { 00:26:00.430 "name": "5319150d-0ca9-4abc-bdba-fcf527a30c6d", 00:26:00.430 "aliases": [ 00:26:00.430 "lvs/nvme0n1p0" 00:26:00.430 ], 00:26:00.430 "product_name": "Logical Volume", 00:26:00.430 "block_size": 4096, 00:26:00.430 "num_blocks": 26476544, 00:26:00.430 "uuid": "5319150d-0ca9-4abc-bdba-fcf527a30c6d", 00:26:00.430 "assigned_rate_limits": { 00:26:00.430 "rw_ios_per_sec": 0, 00:26:00.430 "rw_mbytes_per_sec": 0, 00:26:00.430 "r_mbytes_per_sec": 0, 00:26:00.430 "w_mbytes_per_sec": 0 00:26:00.430 }, 00:26:00.430 "claimed": false, 00:26:00.430 "zoned": false, 00:26:00.430 "supported_io_types": { 00:26:00.430 "read": true, 00:26:00.430 "write": true, 00:26:00.430 "unmap": true, 00:26:00.430 "flush": false, 00:26:00.430 "reset": true, 00:26:00.430 "nvme_admin": false, 00:26:00.430 "nvme_io": false, 00:26:00.430 "nvme_io_md": false, 00:26:00.430 "write_zeroes": true, 00:26:00.430 "zcopy": false, 00:26:00.430 "get_zone_info": false, 00:26:00.430 "zone_management": false, 00:26:00.430 "zone_append": false, 00:26:00.430 "compare": false, 00:26:00.430 "compare_and_write": false, 00:26:00.430 "abort": false, 00:26:00.430 "seek_hole": true, 00:26:00.430 "seek_data": true, 00:26:00.430 "copy": false, 00:26:00.430 "nvme_iov_md": false 00:26:00.430 }, 00:26:00.430 "driver_specific": { 00:26:00.430 "lvol": { 00:26:00.430 "lvol_store_uuid": "29125aa1-3373-444f-af12-bdfc51a7eb82", 00:26:00.430 "base_bdev": "nvme0n1", 00:26:00.430 "thin_provision": true, 00:26:00.430 "num_allocated_clusters": 0, 00:26:00.430 "snapshot": false, 00:26:00.430 "clone": false, 00:26:00.430 "esnap_clone": false 00:26:00.430 } 00:26:00.430 } 00:26:00.430 } 00:26:00.430 ]' 00:26:00.430 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5319150d-0ca9-4abc-bdba-fcf527a30c6d 
--l2p_dram_limit 10' 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:00.689 05:12:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5319150d-0ca9-4abc-bdba-fcf527a30c6d --l2p_dram_limit 10 -c nvc0n1p0 00:26:00.689 [2024-11-21 05:12:17.389576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.389636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.689 [2024-11-21 05:12:17.389650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:00.689 [2024-11-21 05:12:17.389659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.389711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.389721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.689 [2024-11-21 05:12:17.389730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:00.689 [2024-11-21 05:12:17.389741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.389761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.689 [2024-11-21 05:12:17.390292] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.689 [2024-11-21 05:12:17.390325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.390341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.689 [2024-11-21 05:12:17.390352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:26:00.689 [2024-11-21 05:12:17.390360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.390434] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 03f7d698-2ac6-4548-b62b-f05d68202e0f 00:26:00.689 [2024-11-21 05:12:17.391769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.391798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:00.689 [2024-11-21 05:12:17.391811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:26:00.689 [2024-11-21 05:12:17.391817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.398866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.398895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.689 [2024-11-21 05:12:17.398905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.011 ms 00:26:00.689 [2024-11-21 05:12:17.398912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.398978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.398987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.689 [2024-11-21 05:12:17.398996] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:00.689 [2024-11-21 05:12:17.399002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.399054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.399062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.689 [2024-11-21 05:12:17.399071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:00.689 [2024-11-21 05:12:17.399077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.399098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:00.689 [2024-11-21 05:12:17.400764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.400790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.689 [2024-11-21 05:12:17.400798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:26:00.689 [2024-11-21 05:12:17.400806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.689 [2024-11-21 05:12:17.400834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.689 [2024-11-21 05:12:17.400843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.689 [2024-11-21 05:12:17.400849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:00.690 [2024-11-21 05:12:17.400859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.690 [2024-11-21 05:12:17.400872] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:00.690 [2024-11-21 05:12:17.400988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.690 [2024-11-21 05:12:17.400998] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.690 [2024-11-21 05:12:17.401013] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:00.690 [2024-11-21 05:12:17.401021] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401039] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:00.690 [2024-11-21 05:12:17.401050] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.690 [2024-11-21 05:12:17.401056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.690 [2024-11-21 05:12:17.401063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.690 [2024-11-21 05:12:17.401069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.690 [2024-11-21 05:12:17.401077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.690 [2024-11-21 05:12:17.401083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:26:00.690 [2024-11-21 05:12:17.401090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.690 [2024-11-21 05:12:17.401153] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.690 [2024-11-21 05:12:17.401163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.690 [2024-11-21 05:12:17.401169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:00.690 [2024-11-21 05:12:17.401177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.690 [2024-11-21 05:12:17.401260] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.690 [2024-11-21 05:12:17.401272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.690 [2024-11-21 05:12:17.401279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.690 [2024-11-21 05:12:17.401300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.690 [2024-11-21 05:12:17.401318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.690 [2024-11-21 05:12:17.401330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.690 [2024-11-21 05:12:17.401339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:00.690 [2024-11-21 05:12:17.401345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.690 [2024-11-21 05:12:17.401353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.690 [2024-11-21 05:12:17.401359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:00.690 [2024-11-21 05:12:17.401366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:00.690 [2024-11-21 05:12:17.401378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.690 [2024-11-21 05:12:17.401396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.690 [2024-11-21 05:12:17.401416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.690 [2024-11-21 05:12:17.401436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401449] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.690 [2024-11-21 05:12:17.401459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.690 [2024-11-21 05:12:17.401478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.690 [2024-11-21 05:12:17.401491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.690 [2024-11-21 05:12:17.401499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:00.690 [2024-11-21 05:12:17.401505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.690 [2024-11-21 05:12:17.401513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.690 [2024-11-21 05:12:17.401519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:00.690 [2024-11-21 05:12:17.401527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.690 [2024-11-21 05:12:17.401540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:00.690 [2024-11-21 05:12:17.401546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401553] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.690 [2024-11-21 05:12:17.401563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.690 [2024-11-21 05:12:17.401573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.690 [2024-11-21 05:12:17.401589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:00.690 [2024-11-21 05:12:17.401595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.690 [2024-11-21 05:12:17.401602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.690 [2024-11-21 05:12:17.401632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.690 [2024-11-21 05:12:17.401641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.690 [2024-11-21 05:12:17.401647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.690 [2024-11-21 05:12:17.401658] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.690 [2024-11-21 05:12:17.401669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.690 [2024-11-21 05:12:17.401679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:00.691 [2024-11-21 05:12:17.401686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:00.691 [2024-11-21 05:12:17.401694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:00.691 [2024-11-21 05:12:17.401701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:00.691 [2024-11-21 05:12:17.401710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:00.691 [2024-11-21 05:12:17.401716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:00.691 [2024-11-21 05:12:17.401726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:00.691 [2024-11-21 05:12:17.401733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:00.691 [2024-11-21 05:12:17.401740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:00.691 [2024-11-21 05:12:17.401747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:00.691 [2024-11-21 05:12:17.401755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:00.691 [2024-11-21 05:12:17.401761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:00.691 [2024-11-21 05:12:17.401769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:00.691 [2024-11-21 05:12:17.401775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:00.691 [2024-11-21 05:12:17.401783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.691 [2024-11-21 05:12:17.401790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.691 [2024-11-21 05:12:17.401799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.691 [2024-11-21 05:12:17.401811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.691 [2024-11-21 05:12:17.401819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.691 [2024-11-21 05:12:17.401825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.691 [2024-11-21 05:12:17.401834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.691 [2024-11-21 05:12:17.401839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.691 [2024-11-21 05:12:17.401848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:26:00.691 [2024-11-21 05:12:17.401853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.691 [2024-11-21 05:12:17.401888] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:00.691 [2024-11-21 05:12:17.401896] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:04.895 [2024-11-21 05:12:21.458999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.459387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:04.895 [2024-11-21 05:12:21.459421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4057.081 ms 00:26:04.895 [2024-11-21 05:12:21.459432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.478730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.478788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:04.895 [2024-11-21 05:12:21.478807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.151 ms 00:26:04.895 [2024-11-21 05:12:21.478818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.478962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.478975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:04.895 [2024-11-21 05:12:21.478988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:26:04.895 [2024-11-21 05:12:21.478998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.496510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.496565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:04.895 [2024-11-21 05:12:21.496581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.463 ms 00:26:04.895 [2024-11-21 05:12:21.496591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.496663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.496674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:04.895 [2024-11-21 05:12:21.496686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:04.895 [2024-11-21 05:12:21.496694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.497426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.497459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:04.895 [2024-11-21 05:12:21.497473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:26:04.895 [2024-11-21 05:12:21.497483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.497631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.497646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:04.895 [2024-11-21 05:12:21.497657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:04.895 [2024-11-21 05:12:21.497667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.509149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.509350] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:04.895 [2024-11-21 05:12:21.509378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.453 ms 00:26:04.895 [2024-11-21 05:12:21.509388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.520723] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:04.895 [2024-11-21 05:12:21.525706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.525752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:04.895 [2024-11-21 05:12:21.525766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.219 ms 00:26:04.895 [2024-11-21 05:12:21.525778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.618077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.618370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:04.895 [2024-11-21 05:12:21.618402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.263 ms 00:26:04.895 [2024-11-21 05:12:21.618419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.618669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.618686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:04.895 [2024-11-21 05:12:21.618697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:26:04.895 [2024-11-21 05:12:21.618708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.895 [2024-11-21 05:12:21.624545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.895 [2024-11-21 05:12:21.624604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:04.895 [2024-11-21 05:12:21.624635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.780 ms 00:26:04.895 [2024-11-21 05:12:21.624651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.629642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.629691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:05.157 [2024-11-21 05:12:21.629703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:26:05.157 [2024-11-21 05:12:21.629713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.630069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.630084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.157 [2024-11-21 05:12:21.630094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:26:05.157 [2024-11-21 05:12:21.630108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.672454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.672681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:05.157 [2024-11-21 05:12:21.672703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.317 ms 00:26:05.157 [2024-11-21 05:12:21.672720] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.680595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.680665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:05.157 [2024-11-21 05:12:21.680678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.810 ms 00:26:05.157 [2024-11-21 05:12:21.680690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.686425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.686476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:05.157 [2024-11-21 05:12:21.686487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.687 ms 00:26:05.157 [2024-11-21 05:12:21.686496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.692541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.692732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.157 [2024-11-21 05:12:21.692751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.998 ms 00:26:05.157 [2024-11-21 05:12:21.692765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.157 [2024-11-21 05:12:21.692811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.157 [2024-11-21 05:12:21.692826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.158 [2024-11-21 05:12:21.692836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:05.158 [2024-11-21 05:12:21.692858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.158 [2024-11-21 05:12:21.692963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.158 [2024-11-21 05:12:21.692977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:05.158 [2024-11-21 05:12:21.692987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:05.158 [2024-11-21 05:12:21.692999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.158 [2024-11-21 05:12:21.694412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4304.277 ms, result 0 00:26:05.158 { 00:26:05.158 "name": "ftl0", 00:26:05.158 "uuid": "03f7d698-2ac6-4548-b62b-f05d68202e0f" 00:26:05.158 } 00:26:05.158 05:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:05.158 05:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:05.419 05:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:05.419 05:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:05.419 05:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:05.680 /dev/nbd0 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:05.680 1+0 records in 00:26:05.680 1+0 records out 00:26:05.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557599 s, 7.3 MB/s 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:05.680 05:12:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:05.680 [2024-11-21 05:12:22.274183] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:26:05.680 [2024-11-21 05:12:22.274937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91676 ] 00:26:05.941 [2024-11-21 05:12:22.439454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.941 [2024-11-21 05:12:22.480037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.887  [2024-11-21T05:12:25.008Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-21T05:12:25.944Z] Copying: 374/1024 [MB] (188 MBps) [2024-11-21T05:12:26.880Z] Copying: 623/1024 [MB] (248 MBps) [2024-11-21T05:12:27.449Z] Copying: 873/1024 [MB] (249 MBps) [2024-11-21T05:12:27.449Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:26:10.715 00:26:10.715 05:12:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:13.261 05:12:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:13.261 [2024-11-21 05:12:29.479823] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:26:13.261 [2024-11-21 05:12:29.479923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91753 ] 00:26:13.261 [2024-11-21 05:12:29.647297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.261 [2024-11-21 05:12:29.671940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.202  [2024-11-21T05:12:31.880Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-21T05:12:32.814Z] Copying: 38/1024 [MB] (19 MBps) [2024-11-21T05:12:33.750Z] Copying: 66/1024 [MB] (28 MBps) [2024-11-21T05:12:35.126Z] Copying: 98/1024 [MB] (31 MBps) [2024-11-21T05:12:36.062Z] Copying: 129/1024 [MB] (30 MBps) [2024-11-21T05:12:36.995Z] Copying: 159/1024 [MB] (30 MBps) [2024-11-21T05:12:37.928Z] Copying: 193/1024 [MB] (33 MBps) [2024-11-21T05:12:38.862Z] Copying: 225/1024 [MB] (32 MBps) [2024-11-21T05:12:39.795Z] Copying: 257/1024 [MB] (32 MBps) [2024-11-21T05:12:41.171Z] Copying: 290/1024 [MB] (32 MBps) [2024-11-21T05:12:41.743Z] Copying: 320/1024 [MB] (29 MBps) [2024-11-21T05:12:43.131Z] Copying: 336/1024 [MB] (16 MBps) [2024-11-21T05:12:44.104Z] Copying: 353/1024 [MB] (16 MBps) [2024-11-21T05:12:45.044Z] Copying: 368/1024 [MB] (15 MBps) [2024-11-21T05:12:45.982Z] Copying: 383/1024 [MB] (14 MBps) [2024-11-21T05:12:46.928Z] Copying: 416/1024 [MB] (32 MBps) [2024-11-21T05:12:47.871Z] Copying: 429/1024 [MB] (13 MBps) [2024-11-21T05:12:48.815Z] Copying: 442/1024 [MB] (12 MBps) [2024-11-21T05:12:49.753Z] Copying: 455/1024 [MB] (12 MBps) [2024-11-21T05:12:51.137Z] Copying: 483/1024 [MB] (28 MBps) [2024-11-21T05:12:52.081Z] Copying: 502/1024 [MB] (18 MBps) [2024-11-21T05:12:53.024Z] Copying: 520/1024 [MB] (18 MBps) [2024-11-21T05:12:53.966Z] Copying: 538/1024 [MB] (17 MBps) [2024-11-21T05:12:54.908Z] Copying: 556/1024 [MB] (18 MBps) [2024-11-21T05:12:55.852Z] Copying: 575/1024 [MB] (18 MBps) [2024-11-21T05:12:56.796Z] Copying: 588/1024 [MB] (12 MBps) [2024-11-21T05:12:57.741Z] Copying: 602/1024 [MB] (14 MBps) [2024-11-21T05:12:59.124Z] Copying: 617/1024 [MB] (14 MBps) [2024-11-21T05:13:00.074Z] Copying: 632/1024 [MB] (15 MBps) [2024-11-21T05:13:01.012Z] Copying: 646/1024 [MB] (14 MBps) [2024-11-21T05:13:01.955Z] Copying: 672/1024 [MB] (26 MBps) [2024-11-21T05:13:02.901Z] Copying: 689/1024 [MB] (16 MBps) [2024-11-21T05:13:03.844Z] Copying: 703/1024 [MB] (14 MBps) [2024-11-21T05:13:04.778Z] Copying: 724/1024 [MB] (20 MBps) [2024-11-21T05:13:06.163Z] Copying: 761/1024 [MB] (37 MBps) [2024-11-21T05:13:07.107Z] Copying: 779/1024 [MB] (17 MBps) [2024-11-21T05:13:08.049Z] Copying: 794/1024 [MB] (14 MBps) [2024-11-21T05:13:08.993Z] Copying: 811/1024 [MB] (16 MBps) [2024-11-21T05:13:09.937Z] Copying: 827/1024 [MB] (16 MBps) [2024-11-21T05:13:10.942Z] Copying: 841/1024 [MB] (13 MBps) [2024-11-21T05:13:11.885Z] Copying: 859/1024 [MB] (18 MBps) [2024-11-21T05:13:12.867Z] Copying: 873/1024 [MB] (14 MBps) [2024-11-21T05:13:13.837Z] Copying: 888/1024 [MB] (14 MBps) [2024-11-21T05:13:14.778Z] Copying: 906/1024 [MB] (18 MBps) [2024-11-21T05:13:16.157Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-21T05:13:17.092Z] Copying: 933/1024 [MB] (16 MBps) [2024-11-21T05:13:18.032Z] Copying: 971/1024 [MB] (37 MBps) [2024-11-21T05:13:18.976Z] Copying: 995/1024 [MB] (24 MBps) [2024-11-21T05:13:19.917Z] Copying: 1010/1024 [MB] (14 MBps) [2024-11-21T05:13:19.917Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:27:03.183 
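The trace above is the data-integrity setup for the dirty-shutdown test: spdk_dd first filled a 1 GiB test file from /dev/urandom (bs=4096, count=262144), its md5 was recorded, and the file was then copied onto the FTL bdev through the nbd export at /dev/nbd0. A minimal sketch of that kind of write/read-back checksum round-trip using plain coreutils dd — the read-back step and all file names here are illustrative assumptions, not the test's exact commands:

    #!/usr/bin/env bash
    # Illustrative integrity round-trip over an nbd-exported bdev.
    # Assumes /dev/nbd0 is already connected; "testfile"/"readback" are hypothetical names.
    set -euo pipefail

    bs=4096; count=262144                                          # 1 GiB, matching the test's spdk_dd arguments

    dd if=/dev/urandom of=testfile bs=$bs count=$count             # generate random payload
    md5_before=$(md5sum testfile | awk '{print $1}')               # checksum the source

    dd if=testfile of=/dev/nbd0 bs=$bs count=$count oflag=direct   # write it to the device
    sync /dev/nbd0                                                 # flush before shutdown/unload

    dd if=/dev/nbd0 of=readback bs=$bs count=$count iflag=direct   # read the same range back
    md5_after=$(md5sum readback | awk '{print $1}')

    [ "$md5_before" = "$md5_after" ] && echo OK || echo MISMATCH   # data survived iff checksums match

In the actual test the read-back and comparison happen after the FTL device is torn down and restarted, which is what makes it a shutdown test rather than a plain I/O check; the sync and unload steps follow below.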
00:27:03.183 05:13:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:03.183 05:13:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:03.442 05:13:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:03.442 [2024-11-21 05:13:20.159449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.442 [2024-11-21 05:13:20.159492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:03.442 [2024-11-21 05:13:20.159507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:03.442 [2024-11-21 05:13:20.159516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.442 [2024-11-21 05:13:20.159540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:03.442 [2024-11-21 05:13:20.160103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.442 [2024-11-21 05:13:20.160133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:03.442 [2024-11-21 05:13:20.160142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:27:03.442 [2024-11-21 05:13:20.160150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.442 [2024-11-21 05:13:20.162082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.442 [2024-11-21 05:13:20.162117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:03.442 [2024-11-21 05:13:20.162130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:27:03.442 [2024-11-21 05:13:20.162138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.175454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.175485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:03.704 [2024-11-21 05:13:20.175497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.302 ms 00:27:03.704 [2024-11-21 05:13:20.175506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.180331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.180359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:03.704 [2024-11-21 05:13:20.180368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.799 ms 00:27:03.704 [2024-11-21 05:13:20.180377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.181530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.181565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:03.704 [2024-11-21 05:13:20.181573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:27:03.704 [2024-11-21 05:13:20.181583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.185993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.186034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:03.704 [2024-11-21 05:13:20.186042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.367 ms 00:27:03.704 [2024-11-21 05:13:20.186049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.186144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.186154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:03.704 [2024-11-21 05:13:20.186161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:03.704 [2024-11-21 05:13:20.186168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.187968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.187998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:03.704 [2024-11-21 05:13:20.188006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:27:03.704 [2024-11-21 05:13:20.188013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.189444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.189475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:03.704 [2024-11-21 05:13:20.189483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:27:03.704 [2024-11-21 05:13:20.189490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.190590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.190626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:03.704 [2024-11-21 05:13:20.190634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:27:03.704 [2024-11-21 05:13:20.190641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.191370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.704 [2024-11-21 05:13:20.191400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:03.704 [2024-11-21 05:13:20.191406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:27:03.704 [2024-11-21 05:13:20.191415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.704 [2024-11-21 05:13:20.191439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:03.704 [2024-11-21 05:13:20.191453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191507] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 
[2024-11-21 05:13:20.191699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:03.704 [2024-11-21 05:13:20.191713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:27:03.705 [2024-11-21 05:13:20.191875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.191994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:03.705 [2024-11-21 05:13:20.192180] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:03.705 [2024-11-21 05:13:20.192187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f7d698-2ac6-4548-b62b-f05d68202e0f 00:27:03.705 [2024-11-21 05:13:20.192195] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:03.705 [2024-11-21 05:13:20.192201] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:03.705 [2024-11-21 05:13:20.192208] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:03.705 [2024-11-21 05:13:20.192214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:03.705 [2024-11-21 05:13:20.192221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:03.705 [2024-11-21 05:13:20.192227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:27:03.705 [2024-11-21 05:13:20.192235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:03.705 [2024-11-21 05:13:20.192239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:03.705 [2024-11-21 05:13:20.192246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:03.705 [2024-11-21 05:13:20.192252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.705 [2024-11-21 05:13:20.192259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:03.705 [2024-11-21 05:13:20.192266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:27:03.705 [2024-11-21 05:13:20.192275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.705 [2024-11-21 05:13:20.194068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.705 [2024-11-21 05:13:20.194087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:03.705 [2024-11-21 05:13:20.194095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms 00:27:03.705 [2024-11-21 05:13:20.194104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.705 [2024-11-21 05:13:20.194191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.705 [2024-11-21 05:13:20.194199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:03.705 [2024-11-21 05:13:20.194209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:03.705 [2024-11-21 05:13:20.194216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.705 [2024-11-21 05:13:20.200542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.705 [2024-11-21 05:13:20.200574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:03.705 [2024-11-21 05:13:20.200583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.705 [2024-11-21 05:13:20.200595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.705 [2024-11-21 05:13:20.200658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.705 [2024-11-21 05:13:20.200668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:03.705 [2024-11-21 05:13:20.200677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.705 [2024-11-21 05:13:20.200688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.705 [2024-11-21 05:13:20.200749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.200762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:03.706 [2024-11-21 05:13:20.200769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.200776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.200790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.200798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:03.706 [2024-11-21 05:13:20.200804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.200814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.212108] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.212147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:03.706 [2024-11-21 05:13:20.212157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.212165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:03.706 [2024-11-21 05:13:20.221052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:03.706 [2024-11-21 05:13:20.221149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:03.706 [2024-11-21 05:13:20.221220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:03.706 [2024-11-21 05:13:20.221308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:03.706 [2024-11-21 05:13:20.221355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:03.706 [2024-11-21 05:13:20.221418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.706 [2024-11-21 05:13:20.221467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.706 [2024-11-21 05:13:20.221477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:03.706 [2024-11-21 05:13:20.221483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.706 [2024-11-21 05:13:20.221491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:27:03.706 [2024-11-21 05:13:20.221824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.134 ms, result 0
00:27:03.706 true
00:27:03.706 05:13:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91524
00:27:03.706 05:13:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91524
00:27:03.706 05:13:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:27:03.706 [2024-11-21 05:13:20.311029] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
[2024-11-21 05:13:20.311162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92282 ]
00:27:03.965 [2024-11-21 05:13:20.467653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:03.965 [2024-11-21 05:13:20.501879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:04.898  [2024-11-21T05:13:23.009Z] Copying: 256/1024 [MB] (256 MBps)
[2024-11-21T05:13:23.945Z] Copying: 513/1024 [MB] (257 MBps)
[2024-11-21T05:13:24.882Z] Copying: 768/1024 [MB] (254 MBps)
[2024-11-21T05:13:24.882Z] Copying: 1022/1024 [MB] (254 MBps)
[2024-11-21T05:13:24.882Z] Copying: 1024/1024 [MB] (average 255 MBps)
00:27:08.148
00:27:08.148 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91524 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:27:08.148 05:13:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:08.148 [2024-11-21 05:13:24.823905] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
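For reference, the two spdk_dd invocations above are the whole dirty-shutdown I/O pattern in this test: kill -9 the target while the FTL device is dirty, stage 1 GiB of random data in testfile2, then replay that file onto the ftl0 bdev from the saved ftl.json. A minimal sketch using only the flags and paths that appear in this log; the SPDK_DD variable is shorthand introduced here, and ftl.json is assumed to have been written earlier by the test:

    # Shorthand for the spdk_dd binary path used throughout this log (assumption: same tree layout).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # Stage 1 GiB of random data: 262144 blocks x 4096 B = 1024 MiB,
    # which matches the "Copying: 1024/1024 [MB]" progress lines above.
    "$SPDK_DD" --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --bs=4096 --count=262144

    # Replay the staged data onto the ftl0 bdev at a 262144-block offset,
    # loading the FTL configuration saved before spdk_tgt was killed.
    "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --ob=ftl0 --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json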
00:27:08.148 [2024-11-21 05:13:24.824022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92328 ] 00:27:08.408 [2024-11-21 05:13:24.979526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.408 [2024-11-21 05:13:25.014019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.408 [2024-11-21 05:13:25.115293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:08.408 [2024-11-21 05:13:25.115525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:08.669 [2024-11-21 05:13:25.177446] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:08.669 [2024-11-21 05:13:25.177727] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:08.669 [2024-11-21 05:13:25.178018] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:08.669 [2024-11-21 05:13:25.358557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.358606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:08.669 [2024-11-21 05:13:25.358631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:08.669 [2024-11-21 05:13:25.358637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.358687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.358697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:08.669 [2024-11-21 05:13:25.358703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:08.669 [2024-11-21 05:13:25.358709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.358725] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:08.669 [2024-11-21 05:13:25.358919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:08.669 [2024-11-21 05:13:25.358930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.358939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:08.669 [2024-11-21 05:13:25.358946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:27:08.669 [2024-11-21 05:13:25.358955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.360230] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:08.669 [2024-11-21 05:13:25.362795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.362824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:08.669 [2024-11-21 05:13:25.362833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:27:08.669 [2024-11-21 05:13:25.362844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.362887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.362895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:08.669 [2024-11-21 05:13:25.362904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:08.669 [2024-11-21 05:13:25.362910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.369214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.369352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:08.669 [2024-11-21 05:13:25.369366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms 00:27:08.669 [2024-11-21 05:13:25.369373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.369448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.369455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:08.669 [2024-11-21 05:13:25.369463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:08.669 [2024-11-21 05:13:25.369469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.369514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.369522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:08.669 [2024-11-21 05:13:25.369528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:08.669 [2024-11-21 05:13:25.369534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.369552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:08.669 [2024-11-21 05:13:25.371125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.371148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:08.669 [2024-11-21 05:13:25.371155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:27:08.669 [2024-11-21 05:13:25.371165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.371192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.371199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:08.669 [2024-11-21 05:13:25.371212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:08.669 [2024-11-21 05:13:25.371218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.669 [2024-11-21 05:13:25.371235] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:08.669 [2024-11-21 05:13:25.371252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:08.669 [2024-11-21 05:13:25.371286] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:08.669 [2024-11-21 05:13:25.371303] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:08.669 [2024-11-21 05:13:25.371385] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:08.669 [2024-11-21 05:13:25.371393] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:08.669 
[2024-11-21 05:13:25.371405] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:08.669 [2024-11-21 05:13:25.371414] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:08.669 [2024-11-21 05:13:25.371421] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:08.669 [2024-11-21 05:13:25.371427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:08.669 [2024-11-21 05:13:25.371432] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:08.669 [2024-11-21 05:13:25.371438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:08.669 [2024-11-21 05:13:25.371444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:08.669 [2024-11-21 05:13:25.371452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.669 [2024-11-21 05:13:25.371457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:08.670 [2024-11-21 05:13:25.371464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:27:08.670 [2024-11-21 05:13:25.371469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.670 [2024-11-21 05:13:25.371535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.670 [2024-11-21 05:13:25.371541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:08.670 [2024-11-21 05:13:25.371551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:08.670 [2024-11-21 05:13:25.371556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.670 [2024-11-21 05:13:25.371647] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:08.670 [2024-11-21 05:13:25.371664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:08.670 [2024-11-21 05:13:25.371671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:08.670 [2024-11-21 05:13:25.371688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:08.670 [2024-11-21 05:13:25.371705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:08.670 [2024-11-21 05:13:25.371716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:08.670 [2024-11-21 05:13:25.371722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:08.670 [2024-11-21 05:13:25.371730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:08.670 [2024-11-21 05:13:25.371736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:08.670 [2024-11-21 05:13:25.371741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:08.670 [2024-11-21 05:13:25.371747] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:08.670 [2024-11-21 05:13:25.371757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:08.670 [2024-11-21 05:13:25.371775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:08.670 [2024-11-21 05:13:25.371793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:08.670 [2024-11-21 05:13:25.371811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:08.670 [2024-11-21 05:13:25.371834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:08.670 [2024-11-21 05:13:25.371851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:08.670 [2024-11-21 05:13:25.371862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:08.670 [2024-11-21 05:13:25.371868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:08.670 [2024-11-21 05:13:25.371874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:08.670 [2024-11-21 05:13:25.371880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:08.670 [2024-11-21 05:13:25.371886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:08.670 [2024-11-21 05:13:25.371892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:08.670 [2024-11-21 05:13:25.371903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:08.670 [2024-11-21 05:13:25.371909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.670 [2024-11-21 05:13:25.371915] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:08.670 [2024-11-21 05:13:25.371923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:08.670 [2024-11-21 05:13:25.371929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.670 [2024-11-21 
05:13:25.371945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:08.670 [2024-11-21 05:13:25.371951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:08.670 [2024-11-21 05:13:25.371957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:08.670 [2024-11-21 05:13:25.371963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:08.670 [2024-11-21 05:13:25.371973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:08.670 [2024-11-21 05:13:25.371979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:08.670 [2024-11-21 05:13:25.371986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:08.670 [2024-11-21 05:13:25.371994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.670 [2024-11-21 05:13:25.372001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:08.670 [2024-11-21 05:13:25.372008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:08.670 [2024-11-21 05:13:25.372014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:08.670 [2024-11-21 05:13:25.372021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:08.670 [2024-11-21 05:13:25.372027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:08.670 [2024-11-21 05:13:25.372035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:08.670 [2024-11-21 05:13:25.372041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:08.670 [2024-11-21 05:13:25.372048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:08.670 [2024-11-21 05:13:25.372054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:08.670 [2024-11-21 05:13:25.372061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:08.670 [2024-11-21 05:13:25.372067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:08.670 [2024-11-21 05:13:25.372073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:08.670 [2024-11-21 05:13:25.372079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:08.670 [2024-11-21 05:13:25.372086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:08.670 [2024-11-21 05:13:25.372092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:08.670 [2024-11-21 05:13:25.372100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.670 [2024-11-21 05:13:25.372110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:08.671 [2024-11-21 05:13:25.372116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:08.671 [2024-11-21 05:13:25.372122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:08.671 [2024-11-21 05:13:25.372129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:08.671 [2024-11-21 05:13:25.372136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.671 [2024-11-21 05:13:25.372144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:08.671 [2024-11-21 05:13:25.372151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:27:08.671 [2024-11-21 05:13:25.372160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.671 [2024-11-21 05:13:25.383741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.671 [2024-11-21 05:13:25.383846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:08.671 [2024-11-21 05:13:25.383893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.537 ms 00:27:08.671 [2024-11-21 05:13:25.383928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.671 [2024-11-21 05:13:25.384014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.671 [2024-11-21 05:13:25.384054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:08.671 [2024-11-21 05:13:25.384076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:08.671 [2024-11-21 05:13:25.384126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.407030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.407285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:08.930 [2024-11-21 05:13:25.407489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.838 ms 00:27:08.930 [2024-11-21 05:13:25.407555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.407732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.407796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:08.930 [2024-11-21 05:13:25.407918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:08.930 [2024-11-21 05:13:25.407968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.408640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.408819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:08.930 [2024-11-21 05:13:25.408930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:27:08.930 [2024-11-21 05:13:25.408985] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.409397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.409532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:08.930 [2024-11-21 05:13:25.409658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:27:08.930 [2024-11-21 05:13:25.409755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.417202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.417228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:08.930 [2024-11-21 05:13:25.417236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.399 ms 00:27:08.930 [2024-11-21 05:13:25.417243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.419682] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:08.930 [2024-11-21 05:13:25.419713] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:08.930 [2024-11-21 05:13:25.419723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.419729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:08.930 [2024-11-21 05:13:25.419736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:27:08.930 [2024-11-21 05:13:25.419743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.431087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.431199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:08.930 [2024-11-21 05:13:25.431212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.310 ms 00:27:08.930 [2024-11-21 05:13:25.431218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.432757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.432784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:08.930 [2024-11-21 05:13:25.432792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:27:08.930 [2024-11-21 05:13:25.432798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.433999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.434025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:08.930 [2024-11-21 05:13:25.434033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:27:08.930 [2024-11-21 05:13:25.434039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.434315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.434329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:08.930 [2024-11-21 05:13:25.434336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:27:08.930 [2024-11-21 05:13:25.434342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 
[2024-11-21 05:13:25.450110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.450262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:08.930 [2024-11-21 05:13:25.450279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.750 ms 00:27:08.930 [2024-11-21 05:13:25.450287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.456265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:08.930 [2024-11-21 05:13:25.458973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.458997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:08.930 [2024-11-21 05:13:25.459008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.651 ms 00:27:08.930 [2024-11-21 05:13:25.459015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.459074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.459083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:08.930 [2024-11-21 05:13:25.459090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:08.930 [2024-11-21 05:13:25.459098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.459189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.459197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:08.930 [2024-11-21 05:13:25.459204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:08.930 [2024-11-21 05:13:25.459210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.459229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.459237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:08.930 [2024-11-21 05:13:25.459243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:08.930 [2024-11-21 05:13:25.459249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.459279] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:08.930 [2024-11-21 05:13:25.459287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.459295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:08.930 [2024-11-21 05:13:25.459305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:08.930 [2024-11-21 05:13:25.459311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.462980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.463070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:08.930 [2024-11-21 05:13:25.463112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.651 ms 00:27:08.930 [2024-11-21 05:13:25.463129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.463202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.930 [2024-11-21 05:13:25.463222] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:08.930 [2024-11-21 05:13:25.463255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:08.930 [2024-11-21 05:13:25.463277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.930 [2024-11-21 05:13:25.464212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.276 ms, result 0 00:27:09.873  [2024-11-21T05:13:27.685Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-21T05:13:28.646Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-21T05:13:29.580Z] Copying: 95/1024 [MB] (51 MBps) [2024-11-21T05:13:30.520Z] Copying: 147/1024 [MB] (52 MBps) [2024-11-21T05:13:31.906Z] Copying: 179/1024 [MB] (31 MBps) [2024-11-21T05:13:32.849Z] Copying: 198/1024 [MB] (18 MBps) [2024-11-21T05:13:33.791Z] Copying: 214/1024 [MB] (16 MBps) [2024-11-21T05:13:34.726Z] Copying: 235/1024 [MB] (20 MBps) [2024-11-21T05:13:35.671Z] Copying: 284/1024 [MB] (49 MBps) [2024-11-21T05:13:36.618Z] Copying: 305/1024 [MB] (21 MBps) [2024-11-21T05:13:37.558Z] Copying: 319/1024 [MB] (14 MBps) [2024-11-21T05:13:38.492Z] Copying: 350/1024 [MB] (31 MBps) [2024-11-21T05:13:39.877Z] Copying: 397/1024 [MB] (46 MBps) [2024-11-21T05:13:40.821Z] Copying: 418/1024 [MB] (20 MBps) [2024-11-21T05:13:41.765Z] Copying: 436/1024 [MB] (18 MBps) [2024-11-21T05:13:42.709Z] Copying: 454/1024 [MB] (17 MBps) [2024-11-21T05:13:43.653Z] Copying: 474/1024 [MB] (19 MBps) [2024-11-21T05:13:44.599Z] Copying: 491/1024 [MB] (16 MBps) [2024-11-21T05:13:45.542Z] Copying: 505/1024 [MB] (13 MBps) [2024-11-21T05:13:46.486Z] Copying: 527336/1048576 [kB] (10176 kBps) [2024-11-21T05:13:47.872Z] Copying: 537424/1048576 [kB] (10088 kBps) [2024-11-21T05:13:48.805Z] Copying: 547552/1048576 [kB] (10128 kBps) [2024-11-21T05:13:49.738Z] Copying: 578/1024 [MB] (43 MBps) [2024-11-21T05:13:50.674Z] Copying: 631/1024 [MB] (52 MBps) [2024-11-21T05:13:51.617Z] Copying: 684/1024 [MB] (52 MBps) [2024-11-21T05:13:52.561Z] Copying: 707/1024 [MB] (23 MBps) [2024-11-21T05:13:53.497Z] Copying: 718/1024 [MB] (10 MBps) [2024-11-21T05:13:54.882Z] Copying: 731/1024 [MB] (12 MBps) [2024-11-21T05:13:55.821Z] Copying: 747/1024 [MB] (15 MBps) [2024-11-21T05:13:56.761Z] Copying: 762/1024 [MB] (15 MBps) [2024-11-21T05:13:57.696Z] Copying: 781/1024 [MB] (19 MBps) [2024-11-21T05:13:58.628Z] Copying: 827/1024 [MB] (45 MBps) [2024-11-21T05:13:59.561Z] Copying: 880/1024 [MB] (53 MBps) [2024-11-21T05:14:00.493Z] Copying: 933/1024 [MB] (53 MBps) [2024-11-21T05:14:01.866Z] Copying: 974/1024 [MB] (40 MBps) [2024-11-21T05:14:02.434Z] Copying: 1023/1024 [MB] (48 MBps) [2024-11-21T05:14:02.434Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-21 05:14:02.234471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.234531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:45.700 [2024-11-21 05:14:02.234548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:45.700 [2024-11-21 05:14:02.234564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.700 [2024-11-21 05:14:02.235505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:45.700 [2024-11-21 05:14:02.237943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.237977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:27:45.700 [2024-11-21 05:14:02.237988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:27:45.700 [2024-11-21 05:14:02.237997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.700 [2024-11-21 05:14:02.251704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.251738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:45.700 [2024-11-21 05:14:02.251749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.144 ms 00:27:45.700 [2024-11-21 05:14:02.251757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.700 [2024-11-21 05:14:02.275258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.275295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:45.700 [2024-11-21 05:14:02.275306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.485 ms 00:27:45.700 [2024-11-21 05:14:02.275319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.700 [2024-11-21 05:14:02.281524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.281671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:45.700 [2024-11-21 05:14:02.281689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:27:45.700 [2024-11-21 05:14:02.281698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.700 [2024-11-21 05:14:02.284088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.284122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:45.700 [2024-11-21 05:14:02.284132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.350 ms 00:27:45.700 [2024-11-21 05:14:02.284139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.700 [2024-11-21 05:14:02.288644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.700 [2024-11-21 05:14:02.288768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:45.700 [2024-11-21 05:14:02.288784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.473 ms 00:27:45.700 [2024-11-21 05:14:02.288792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.963 [2024-11-21 05:14:02.562773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.963 [2024-11-21 05:14:02.562834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:45.963 [2024-11-21 05:14:02.562849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 273.943 ms 00:27:45.963 [2024-11-21 05:14:02.562868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.963 [2024-11-21 05:14:02.566308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.963 [2024-11-21 05:14:02.566359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:45.963 [2024-11-21 05:14:02.566370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.421 ms 00:27:45.963 [2024-11-21 05:14:02.566377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.963 [2024-11-21 05:14:02.569403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.963 [2024-11-21 05:14:02.569466] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:45.963 [2024-11-21 05:14:02.569477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:27:45.963 [2024-11-21 05:14:02.569485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.963 [2024-11-21 05:14:02.571783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.963 [2024-11-21 05:14:02.571954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:45.963 [2024-11-21 05:14:02.571974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:27:45.963 [2024-11-21 05:14:02.571981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.963 [2024-11-21 05:14:02.573957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.963 [2024-11-21 05:14:02.574005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:45.963 [2024-11-21 05:14:02.574015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:27:45.963 [2024-11-21 05:14:02.574024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.963 [2024-11-21 05:14:02.574066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:45.963 [2024-11-21 05:14:02.574084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120832 / 261120 wr_cnt: 1 state: open 00:27:45.963 [2024-11-21 05:14:02.574110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:45.963 [2024-11-21 05:14:02.574231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
[... Bands 17-100 elided: 84 identical ftl_dev_dump_bands entries, each 0 / 261120 wr_cnt: 0 state: free ...]
00:27:45.964 [2024-11-21 05:14:02.574949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:45.964 [2024-11-21 05:14:02.574962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f7d698-2ac6-4548-b62b-f05d68202e0f
00:27:45.964 [2024-11-21 05:14:02.574971] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120832
00:27:45.964 [2024-11-21 05:14:02.574980] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121792
00:27:45.964 [2024-11-21 05:14:02.574989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120832
00:27:45.964 [2024-11-21 05:14:02.574999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079
00:27:45.964 [2024-11-21 05:14:02.575007] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:45.964 [2024-11-21 05:14:02.575025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:45.964 [2024-11-21 05:14:02.575033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:45.964 [2024-11-21 05:14:02.575039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:45.964 [2024-11-21 05:14:02.575047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:45.964 [2024-11-21 05:14:02.575054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.964 [2024-11-21 05:14:02.575062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:45.964 [2024-11-21 05:14:02.575072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms
00:27:45.964 [2024-11-21 05:14:02.575084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:45.964 [2024-11-21 05:14:02.578685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:45.964 [2024-11-21 05:14:02.578825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:45.964 [2024-11-21 05:14:02.578940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.576 ms
00:27:45.964 [2024-11-21 05:14:02.578971] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.579144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.964 [2024-11-21 05:14:02.579177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:45.964 [2024-11-21 05:14:02.579247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:27:45.964 [2024-11-21 05:14:02.579270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.589325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.589498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:45.964 [2024-11-21 05:14:02.589556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.589580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.589688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.589842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:45.964 [2024-11-21 05:14:02.589874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.589895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.590058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.590126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:45.964 [2024-11-21 05:14:02.590171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.590196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.590285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.590349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:45.964 [2024-11-21 05:14:02.590405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.590429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.609783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.609985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:45.964 [2024-11-21 05:14:02.610044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.610069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.625584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.625657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.964 [2024-11-21 05:14:02.625670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.625680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.625752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.625764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.964 [2024-11-21 05:14:02.625774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.625783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.625824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.625834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.964 [2024-11-21 05:14:02.625844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.625858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.625948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.625961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.964 [2024-11-21 05:14:02.625976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.625986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.626020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.626031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:45.964 [2024-11-21 05:14:02.626040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.626048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.626103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.626114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.964 [2024-11-21 05:14:02.626124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.626133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.964 [2024-11-21 05:14:02.626191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:45.964 [2024-11-21 05:14:02.626204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.964 [2024-11-21 05:14:02.626214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:45.964 [2024-11-21 05:14:02.626228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.965 [2024-11-21 05:14:02.626398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.666 ms, result 0 00:27:47.353 00:27:47.353 00:27:47.353 05:14:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:49.270 05:14:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:49.531 [2024-11-21 05:14:06.045868] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
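[Editor's note: two quick arithmetic checks on the shutdown statistics dump and the spdk_dd invocation above, as a minimal Python sketch. The 4 KiB block size is an inference from the log itself (262144 blocks against the 1048576 kB / 1024 MB totals in the "Copying:" progress lines that follow), not a value read from the SPDK source.]

    # Sanity-check the FTL stats dump and the spdk_dd transfer size.
    total_writes, user_writes = 121792, 120832        # from the ftl_dev_dump_stats block above
    print(f"WAF = {total_writes / user_writes:.4f}")  # -> WAF = 1.0079, as reported

    count, block = 262144, 4096                       # spdk_dd --count; 4 KiB block size assumed (inferred)
    print(count * block // 2**20, "MiB")              # -> 1024 MiB, matching the 1048576 kB / 1024 MB
                                                      #    totals in the progress records below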
00:27:49.531 [2024-11-21 05:14:06.045995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92750 ] 00:27:49.531 [2024-11-21 05:14:06.207355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.531 [2024-11-21 05:14:06.247853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.792 [2024-11-21 05:14:06.396228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:49.792 [2024-11-21 05:14:06.396322] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:50.055 [2024-11-21 05:14:06.560780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.560842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:50.055 [2024-11-21 05:14:06.560859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:50.055 [2024-11-21 05:14:06.560868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.560934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.560946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:50.055 [2024-11-21 05:14:06.560955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:50.055 [2024-11-21 05:14:06.560964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.560995] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:50.055 [2024-11-21 05:14:06.561332] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:50.055 [2024-11-21 05:14:06.561353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.561369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:50.055 [2024-11-21 05:14:06.561379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:27:50.055 [2024-11-21 05:14:06.561391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.563713] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:50.055 [2024-11-21 05:14:06.568316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.568372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:50.055 [2024-11-21 05:14:06.568385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:27:50.055 [2024-11-21 05:14:06.568402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.568489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.568505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:50.055 [2024-11-21 05:14:06.568515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:50.055 [2024-11-21 05:14:06.568523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.580065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:50.055 [2024-11-21 05:14:06.580110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:50.055 [2024-11-21 05:14:06.580132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.496 ms 00:27:50.055 [2024-11-21 05:14:06.580141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.580256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.580272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:50.055 [2024-11-21 05:14:06.580286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:50.055 [2024-11-21 05:14:06.580294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.580355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.580371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:50.055 [2024-11-21 05:14:06.580380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:50.055 [2024-11-21 05:14:06.580387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.580419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:50.055 [2024-11-21 05:14:06.583088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.583286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:50.055 [2024-11-21 05:14:06.583314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:27:50.055 [2024-11-21 05:14:06.583323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.583365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.583373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:50.055 [2024-11-21 05:14:06.583383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:50.055 [2024-11-21 05:14:06.583398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.583425] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:50.055 [2024-11-21 05:14:06.583452] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:50.055 [2024-11-21 05:14:06.583492] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:50.055 [2024-11-21 05:14:06.583514] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:50.055 [2024-11-21 05:14:06.583652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:50.055 [2024-11-21 05:14:06.583665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:50.055 [2024-11-21 05:14:06.583677] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:50.055 [2024-11-21 05:14:06.583692] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:50.055 [2024-11-21 05:14:06.583702] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:50.055 [2024-11-21 05:14:06.583711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:50.055 [2024-11-21 05:14:06.583720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:50.055 [2024-11-21 05:14:06.583728] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:50.055 [2024-11-21 05:14:06.583736] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:50.055 [2024-11-21 05:14:06.583744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.583756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:50.055 [2024-11-21 05:14:06.583768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:27:50.055 [2024-11-21 05:14:06.583776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.583861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.055 [2024-11-21 05:14:06.583874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:50.055 [2024-11-21 05:14:06.583887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:50.055 [2024-11-21 05:14:06.583896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.055 [2024-11-21 05:14:06.584005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:50.055 [2024-11-21 05:14:06.584017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:50.055 [2024-11-21 05:14:06.584028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:50.055 [2024-11-21 05:14:06.584037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.055 [2024-11-21 05:14:06.584046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:50.055 [2024-11-21 05:14:06.584065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:50.055 [2024-11-21 05:14:06.584074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:50.055 [2024-11-21 05:14:06.584083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:50.055 [2024-11-21 05:14:06.584091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:50.055 [2024-11-21 05:14:06.584102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:50.055 [2024-11-21 05:14:06.584110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:50.055 [2024-11-21 05:14:06.584118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:50.055 [2024-11-21 05:14:06.584126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:50.055 [2024-11-21 05:14:06.584134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:50.056 [2024-11-21 05:14:06.584142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:50.056 [2024-11-21 05:14:06.584150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:50.056 [2024-11-21 05:14:06.584170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584179] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:50.056 [2024-11-21 05:14:06.584196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:50.056 [2024-11-21 05:14:06.584220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:50.056 [2024-11-21 05:14:06.584244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:50.056 [2024-11-21 05:14:06.584265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:50.056 [2024-11-21 05:14:06.584288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:50.056 [2024-11-21 05:14:06.584302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:50.056 [2024-11-21 05:14:06.584309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:50.056 [2024-11-21 05:14:06.584317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:50.056 [2024-11-21 05:14:06.584326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:50.056 [2024-11-21 05:14:06.584336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:50.056 [2024-11-21 05:14:06.584343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:50.056 [2024-11-21 05:14:06.584360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:50.056 [2024-11-21 05:14:06.584367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584374] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:50.056 [2024-11-21 05:14:06.584385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:50.056 [2024-11-21 05:14:06.584396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:50.056 [2024-11-21 05:14:06.584411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:50.056 [2024-11-21 05:14:06.584418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:50.056 [2024-11-21 05:14:06.584431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:50.056 
[2024-11-21 05:14:06.584438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:50.056 [2024-11-21 05:14:06.584445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:50.056 [2024-11-21 05:14:06.584452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:50.056 [2024-11-21 05:14:06.584460] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:50.056 [2024-11-21 05:14:06.584473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.056 [2024-11-21 05:14:06.584482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:50.056 [2024-11-21 05:14:06.584493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:50.056 [2024-11-21 05:14:06.584504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:50.056 [2024-11-21 05:14:06.584512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:50.056 [2024-11-21 05:14:06.584520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:50.056 [2024-11-21 05:14:06.584528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:50.056 [2024-11-21 05:14:06.584536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:50.056 [2024-11-21 05:14:06.584544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:50.056 [2024-11-21 05:14:06.584553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:50.056 [2024-11-21 05:14:06.584561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:50.056 [2024-11-21 05:14:06.584568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:50.056 [2024-11-21 05:14:06.584576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:50.056 [2024-11-21 05:14:06.584586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:50.056 [2024-11-21 05:14:06.584594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:50.056 [2024-11-21 05:14:06.584602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:50.056 [2024-11-21 05:14:06.584627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.056 [2024-11-21 05:14:06.584636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20
00:27:50.056 [2024-11-21 05:14:06.584645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:50.056 [2024-11-21 05:14:06.584654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:50.056 [2024-11-21 05:14:06.584662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:50.056 [2024-11-21 05:14:06.584670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.584678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:27:50.056 [2024-11-21 05:14:06.584690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms
00:27:50.056 [2024-11-21 05:14:06.584702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
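[Editor's note: the superblock metadata layout dumped above is easy to sanity-check: each region's blk_offs plus blk_sz should equal the next region's blk_offs, and the type 0x2 region's size works out to 80 MiB, which lines up with both the "Region l2p ... blocks: 80.00 MiB" entry and "L2P entries" times the 4-byte "L2P address size" earlier in the startup trace. A minimal Python sketch; the 4 KiB FTL block size and the type-0x2/l2p correspondence are inferences from this log, not taken from the FTL code.]

    # Verify the nvc SB metadata regions tile the device with no gaps or overlap.
    nvc_regions = [  # (type, blk_offs, blk_sz), copied from the dump above
        (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80),
        (0x4, 0x50a0, 0x80), (0xa, 0x5120, 0x800), (0xb, 0x5920, 0x800),
        (0xc, 0x6120, 0x800), (0xd, 0x6920, 0x800), (0xe, 0x7120, 0x40),
        (0xf, 0x7160, 0x40), (0x10, 0x71a0, 0x20), (0x11, 0x71c0, 0x20),
        (0x6, 0x71e0, 0x20), (0x7, 0x7200, 0x20), (0xfffffffe, 0x7220, 0x13c0e0),
    ]
    for (_, offs, sz), (_, next_offs, _) in zip(nvc_regions, nvc_regions[1:]):
        assert offs + sz == next_offs  # contiguous, as the dump implies

    FTL_BLOCK_SIZE = 4096                               # assumed 4 KiB FTL block
    l2p_blocks = 0x5000                                 # type 0x2 region size above
    print(l2p_blocks * FTL_BLOCK_SIZE // 2**20, "MiB")  # -> 80 MiB, matches "Region l2p"
    print(20971520 * 4 // 2**20, "MiB")                 # L2P entries x 4 B addresses -> 80 MiB too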
00:27:50.056 [2024-11-21 05:14:06.604869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.604918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:50.056 [2024-11-21 05:14:06.604931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.091 ms
00:27:50.056 [2024-11-21 05:14:06.604939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.605032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.605042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:27:50.056 [2024-11-21 05:14:06.605052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:27:50.056 [2024-11-21 05:14:06.605061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.631026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.631082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:50.056 [2024-11-21 05:14:06.631097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.901 ms
00:27:50.056 [2024-11-21 05:14:06.631107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.631165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.631176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:50.056 [2024-11-21 05:14:06.631193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:27:50.056 [2024-11-21 05:14:06.631203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.631990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.632024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:50.056 [2024-11-21 05:14:06.632038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms
00:27:50.056 [2024-11-21 05:14:06.632048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.632222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.632241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:50.056 [2024-11-21 05:14:06.632252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms
00:27:50.056 [2024-11-21 05:14:06.632260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.643243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.643297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:50.056 [2024-11-21 05:14:06.643317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.959 ms
00:27:50.056 [2024-11-21 05:14:06.643326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.056 [2024-11-21 05:14:06.648311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:27:50.056 [2024-11-21 05:14:06.648364] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:50.056 [2024-11-21 05:14:06.648384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.056 [2024-11-21 05:14:06.648393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:27:50.056 [2024-11-21 05:14:06.648403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.943 ms
00:27:50.056 [2024-11-21 05:14:06.648410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.057 [2024-11-21 05:14:06.664440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.057 [2024-11-21 05:14:06.664500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:27:50.057 [2024-11-21 05:14:06.664513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.972 ms
00:27:50.057 [2024-11-21 05:14:06.664521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.057 [2024-11-21 05:14:06.667398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.057 [2024-11-21 05:14:06.667580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:27:50.057 [2024-11-21 05:14:06.667598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.838 ms
00:27:50.057 [2024-11-21 05:14:06.667621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.057 [2024-11-21 05:14:06.670273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.057 [2024-11-21 05:14:06.670320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:27:50.057 [2024-11-21 05:14:06.670331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms
00:27:50.057 [2024-11-21 05:14:06.670347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.057 [2024-11-21 05:14:06.670849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.057 [2024-11-21 05:14:06.670901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:27:50.057 [2024-11-21 05:14:06.670925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms
00:27:50.057 [2024-11-21 05:14:06.670944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:50.057 [2024-11-21 05:14:06.700014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:50.057 [2024-11-21 05:14:06.700242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:27:50.057 [2024-11-21 05:14:06.700264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.976 ms 00:27:50.057 [2024-11-21 05:14:06.700288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.709147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:50.057 [2024-11-21 05:14:06.712911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.712966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:50.057 [2024-11-21 05:14:06.712980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.316 ms 00:27:50.057 [2024-11-21 05:14:06.712990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.713092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.713111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:50.057 [2024-11-21 05:14:06.713126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:50.057 [2024-11-21 05:14:06.713134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.715585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.715648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:50.057 [2024-11-21 05:14:06.715668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.407 ms 00:27:50.057 [2024-11-21 05:14:06.715677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.715720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.715730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:50.057 [2024-11-21 05:14:06.715748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:50.057 [2024-11-21 05:14:06.715757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.715801] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:50.057 [2024-11-21 05:14:06.715813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.715823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:50.057 [2024-11-21 05:14:06.715833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:50.057 [2024-11-21 05:14:06.715848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.722514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.722565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:50.057 [2024-11-21 05:14:06.722576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.645 ms 00:27:50.057 [2024-11-21 05:14:06.722586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 [2024-11-21 05:14:06.722714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:50.057 [2024-11-21 05:14:06.722735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:50.057 [2024-11-21 05:14:06.722745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:50.057 [2024-11-21 05:14:06.722754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:50.057 
[2024-11-21 05:14:06.724352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.019 ms, result 0
[... 37 intermediate spdk_dd progress records (00:27:51.441 onward) elided: the copy ramps from 1044 kBps at 05:14:09Z to a steady 19-36 MBps per interval, reaching 1021/1024 MB by 05:14:44Z ...]
[2024-11-21T05:14:44.672Z] Copying: 1024/1024 [MB] (average 27 MBps)
[2024-11-21 05:14:44.368514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.368596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:27.938 [2024-11-21 05:14:44.368629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:27.938 [2024-11-21 05:14:44.368640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.368667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:27.938 [2024-11-21 05:14:44.369569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.369642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:27.938 [2024-11-21 05:14:44.369663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms
00:28:27.938 [2024-11-21 05:14:44.369747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
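[Editor's note: the reported copy average can be reproduced from two timestamps in the log, assuming the transfer began essentially when 'FTL startup' finished. A minimal check:]

    from datetime import datetime

    # Elapsed time from "FTL startup" completion to the final "Copying:" record.
    t0 = datetime.fromisoformat("2024-11-21 05:14:06.724352")  # startup finished; copy starts right after (assumption)
    t1 = datetime.fromisoformat("2024-11-21 05:14:44.672000")  # final progress record, 1024/1024 [MB]
    rate = 1024 / (t1 - t0).total_seconds()
    print(f"average {rate:.1f} MB/s")  # -> average 27.0 MB/s, matching "(average 27 MBps)"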
00:28:27.938 [2024-11-21 05:14:44.370002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.370012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:27.938 [2024-11-21 05:14:44.370022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms
00:28:27.938 [2024-11-21 05:14:44.370030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.390112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.390332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:27.938 [2024-11-21 05:14:44.390414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.054 ms
00:28:27.938 [2024-11-21 05:14:44.390460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.397272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.397446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:27.938 [2024-11-21 05:14:44.397514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.638 ms
00:28:27.938 [2024-11-21 05:14:44.397554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.401123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.401307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:27.938 [2024-11-21 05:14:44.401373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms
00:28:27.938 [2024-11-21 05:14:44.401399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.406834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.407031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:27.938 [2024-11-21 05:14:44.407504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.379 ms
00:28:27.938 [2024-11-21 05:14:44.407561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.412526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.412709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:27.938 [2024-11-21 05:14:44.412980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms
00:28:27.938 [2024-11-21 05:14:44.413025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.416428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.416625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:27.938 [2024-11-21 05:14:44.416644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms
00:28:27.938 [2024-11-21 05:14:44.416654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.419728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.419929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:27.938 [2024-11-21 05:14:44.419951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms
00:28:27.938 [2024-11-21 05:14:44.419960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.938 [2024-11-21 05:14:44.422212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.938 [2024-11-21 05:14:44.422265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:27.939 [2024-11-21 05:14:44.422276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms
00:28:27.939 [2024-11-21 05:14:44.422284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.939 [2024-11-21 05:14:44.424468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:27.939 [2024-11-21 05:14:44.424520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:27.939 [2024-11-21 05:14:44.424530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms
00:28:27.939 [2024-11-21 05:14:44.424538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:27.939 [2024-11-21 05:14:44.424581] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:27.939 [2024-11-21 05:14:44.424601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:27.939 [2024-11-21 05:14:44.424633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[... Bands 3-100 elided: 98 identical ftl_dev_dump_bands entries, each 0 / 261120 wr_cnt: 0 state: free ...]
wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:27.939 [2024-11-21 05:14:44.425504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:27.939 [2024-11-21 05:14:44.425513] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f7d698-2ac6-4548-b62b-f05d68202e0f 00:28:27.939 [2024-11-21 05:14:44.425534] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:27.939 [2024-11-21 05:14:44.425542] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 143808 00:28:27.939 [2024-11-21 05:14:44.425557] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 141824 00:28:27.939 [2024-11-21 05:14:44.425567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0140 00:28:27.939 [2024-11-21 05:14:44.425575] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:27.939 [2024-11-21 05:14:44.425584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:27.939 [2024-11-21 05:14:44.425592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:27.939 [2024-11-21 05:14:44.425599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:27.939 [2024-11-21 05:14:44.425619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:27.939 [2024-11-21 05:14:44.425628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.939 [2024-11-21 05:14:44.425644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:27.939 [2024-11-21 05:14:44.425652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:28:27.939 [2024-11-21 05:14:44.425661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.428921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.939 [2024-11-21 05:14:44.428966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:27.939 [2024-11-21 05:14:44.428978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:28:27.939 [2024-11-21 05:14:44.428987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.429146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:27.939 [2024-11-21 05:14:44.429156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:28:27.939 [2024-11-21 05:14:44.429173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:28:27.939 [2024-11-21 05:14:44.429181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.439847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.440015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:27.939 [2024-11-21 05:14:44.440078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.440101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.440214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.440237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:27.939 [2024-11-21 05:14:44.440357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.440377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.440877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.441024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:27.939 [2024-11-21 05:14:44.441218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.441305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.441353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.441399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:27.939 [2024-11-21 05:14:44.441423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.441462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.461691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.461894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:27.939 [2024-11-21 05:14:44.461955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.461980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.478210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.478401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:27.939 [2024-11-21 05:14:44.478464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.478503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.478585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 [2024-11-21 05:14:44.478630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:27.939 [2024-11-21 05:14:44.478656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.478676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.478765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.939 
[2024-11-21 05:14:44.478829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:27.939 [2024-11-21 05:14:44.478851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.939 [2024-11-21 05:14:44.478873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.939 [2024-11-21 05:14:44.478990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.940 [2024-11-21 05:14:44.479020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:27.940 [2024-11-21 05:14:44.479033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.940 [2024-11-21 05:14:44.479043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.940 [2024-11-21 05:14:44.479082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.940 [2024-11-21 05:14:44.479094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:27.940 [2024-11-21 05:14:44.479103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.940 [2024-11-21 05:14:44.479114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.940 [2024-11-21 05:14:44.479173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.940 [2024-11-21 05:14:44.479185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:27.940 [2024-11-21 05:14:44.479195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.940 [2024-11-21 05:14:44.479206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.940 [2024-11-21 05:14:44.479270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:27.940 [2024-11-21 05:14:44.479283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:27.940 [2024-11-21 05:14:44.479292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:27.940 [2024-11-21 05:14:44.479302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:27.940 [2024-11-21 05:14:44.479476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 110.907 ms, result 0 00:28:28.200 00:28:28.200 00:28:28.200 05:14:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:30.741 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:30.741 05:14:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:30.741 [2024-11-21 05:14:47.092913] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
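The shutdown statistics dump above reports total writes: 143808 against user writes: 141824, alongside WAF: 1.0140. Those figures are consistent with WAF being the usual write-amplification ratio of total device writes to user writes; the sketch below is a minimal illustration of that arithmetic under that assumption (the variable names are illustrative, not SPDK identifiers):

```python
# Recompute the write-amplification factor (WAF) printed by
# ftl_dev_dump_stats above, assuming WAF = total writes / user writes.
total_writes = 143808  # "total writes" from the dump above
user_writes = 141824   # "user writes" from the dump above

waf = total_writes / user_writes if user_writes else float("inf")
print(f"WAF: {waf:.4f}")  # prints WAF: 1.0140, matching the log line
```

The zero guard mirrors the second dump later in this log, where user writes is 0 and the device reports WAF: inf.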
00:28:30.741 [2024-11-21 05:14:47.093021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93168 ] 00:28:30.741 [2024-11-21 05:14:47.247966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.741 [2024-11-21 05:14:47.276752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.741 [2024-11-21 05:14:47.410485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.741 [2024-11-21 05:14:47.410586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.003 [2024-11-21 05:14:47.575021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.575093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:31.003 [2024-11-21 05:14:47.575111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:31.003 [2024-11-21 05:14:47.575121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.575194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.575207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:31.003 [2024-11-21 05:14:47.575217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:31.003 [2024-11-21 05:14:47.575226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.575259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:31.003 [2024-11-21 05:14:47.575560] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:31.003 [2024-11-21 05:14:47.575579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.575588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:31.003 [2024-11-21 05:14:47.575598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:28:31.003 [2024-11-21 05:14:47.575638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.577993] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:31.003 [2024-11-21 05:14:47.582806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.582862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:31.003 [2024-11-21 05:14:47.582874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.815 ms 00:28:31.003 [2024-11-21 05:14:47.582890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.582977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.582988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:31.003 [2024-11-21 05:14:47.582997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:31.003 [2024-11-21 05:14:47.583005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.595027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:31.003 [2024-11-21 05:14:47.595075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:31.003 [2024-11-21 05:14:47.595095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.968 ms 00:28:31.003 [2024-11-21 05:14:47.595105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.595234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.595246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:31.003 [2024-11-21 05:14:47.595259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:31.003 [2024-11-21 05:14:47.595267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.595327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.595338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:31.003 [2024-11-21 05:14:47.595348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:31.003 [2024-11-21 05:14:47.595361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.595397] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:31.003 [2024-11-21 05:14:47.598144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.598190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:31.003 [2024-11-21 05:14:47.598202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.759 ms 00:28:31.003 [2024-11-21 05:14:47.598213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.598257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.003 [2024-11-21 05:14:47.598268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:31.003 [2024-11-21 05:14:47.598284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:31.003 [2024-11-21 05:14:47.598293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.003 [2024-11-21 05:14:47.598322] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:31.003 [2024-11-21 05:14:47.598348] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:31.003 [2024-11-21 05:14:47.598392] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:31.003 [2024-11-21 05:14:47.598417] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:31.003 [2024-11-21 05:14:47.598531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:31.003 [2024-11-21 05:14:47.598544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:31.003 [2024-11-21 05:14:47.598555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:31.003 [2024-11-21 05:14:47.598570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:31.003 [2024-11-21 05:14:47.598580] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:31.003 [2024-11-21 05:14:47.598591] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:31.004 [2024-11-21 05:14:47.598599] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:31.004 [2024-11-21 05:14:47.598607] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:31.004 [2024-11-21 05:14:47.598629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:31.004 [2024-11-21 05:14:47.598638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.004 [2024-11-21 05:14:47.598650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:31.004 [2024-11-21 05:14:47.598661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:28:31.004 [2024-11-21 05:14:47.598669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.004 [2024-11-21 05:14:47.598753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.004 [2024-11-21 05:14:47.598771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:31.004 [2024-11-21 05:14:47.598780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:31.004 [2024-11-21 05:14:47.598792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.004 [2024-11-21 05:14:47.598902] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:31.004 [2024-11-21 05:14:47.598925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:31.004 [2024-11-21 05:14:47.598936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.004 [2024-11-21 05:14:47.598945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.598955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:31.004 [2024-11-21 05:14:47.598971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.598981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:31.004 [2024-11-21 05:14:47.598989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:31.004 [2024-11-21 05:14:47.598998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.004 [2024-11-21 05:14:47.599018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:31.004 [2024-11-21 05:14:47.599026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:31.004 [2024-11-21 05:14:47.599034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.004 [2024-11-21 05:14:47.599043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:31.004 [2024-11-21 05:14:47.599051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:31.004 [2024-11-21 05:14:47.599059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:31.004 [2024-11-21 05:14:47.599075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599083] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:31.004 [2024-11-21 05:14:47.599099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:31.004 [2024-11-21 05:14:47.599123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:31.004 [2024-11-21 05:14:47.599158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:31.004 [2024-11-21 05:14:47.599184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:31.004 [2024-11-21 05:14:47.599206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.004 [2024-11-21 05:14:47.599219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:31.004 [2024-11-21 05:14:47.599226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:31.004 [2024-11-21 05:14:47.599233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.004 [2024-11-21 05:14:47.599242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:31.004 [2024-11-21 05:14:47.599249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:31.004 [2024-11-21 05:14:47.599256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:31.004 [2024-11-21 05:14:47.599272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:31.004 [2024-11-21 05:14:47.599279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599285] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:31.004 [2024-11-21 05:14:47.599294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:31.004 [2024-11-21 05:14:47.599309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.004 [2024-11-21 05:14:47.599325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:31.004 [2024-11-21 05:14:47.599332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:31.004 [2024-11-21 05:14:47.599339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:31.004 
[2024-11-21 05:14:47.599346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:31.004 [2024-11-21 05:14:47.599353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:31.004 [2024-11-21 05:14:47.599359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:31.004 [2024-11-21 05:14:47.599368] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:31.004 [2024-11-21 05:14:47.599378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.004 [2024-11-21 05:14:47.599387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:31.004 [2024-11-21 05:14:47.599395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:31.004 [2024-11-21 05:14:47.599406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:31.004 [2024-11-21 05:14:47.599415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:31.004 [2024-11-21 05:14:47.599424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:31.004 [2024-11-21 05:14:47.599432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:31.004 [2024-11-21 05:14:47.599441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:31.004 [2024-11-21 05:14:47.599448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:31.004 [2024-11-21 05:14:47.599455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:31.004 [2024-11-21 05:14:47.599463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:31.005 [2024-11-21 05:14:47.599470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:31.005 [2024-11-21 05:14:47.599477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:31.005 [2024-11-21 05:14:47.599485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:31.005 [2024-11-21 05:14:47.599493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:31.005 [2024-11-21 05:14:47.599500] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:31.005 [2024-11-21 05:14:47.599513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.005 [2024-11-21 05:14:47.599522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:31.005 [2024-11-21 05:14:47.599530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:31.005 [2024-11-21 05:14:47.599541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:31.005 [2024-11-21 05:14:47.599549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:31.005 [2024-11-21 05:14:47.599556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.599564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:31.005 [2024-11-21 05:14:47.599572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:28:31.005 [2024-11-21 05:14:47.599580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.620716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.620923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:31.005 [2024-11-21 05:14:47.620944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.052 ms 00:28:31.005 [2024-11-21 05:14:47.620953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.621061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.621071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:31.005 [2024-11-21 05:14:47.621091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:31.005 [2024-11-21 05:14:47.621100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.645182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.645272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.005 [2024-11-21 05:14:47.645288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.012 ms 00:28:31.005 [2024-11-21 05:14:47.645299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.645357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.645369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.005 [2024-11-21 05:14:47.645386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:31.005 [2024-11-21 05:14:47.645395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.646195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.646222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.005 [2024-11-21 05:14:47.646235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:28:31.005 [2024-11-21 05:14:47.646244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.646422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.646432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.005 [2024-11-21 05:14:47.646442] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:28:31.005 [2024-11-21 05:14:47.646451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.658011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.658053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.005 [2024-11-21 05:14:47.658076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.536 ms 00:28:31.005 [2024-11-21 05:14:47.658086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.662600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:31.005 [2024-11-21 05:14:47.662662] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:31.005 [2024-11-21 05:14:47.662677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.662688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:31.005 [2024-11-21 05:14:47.662700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.464 ms 00:28:31.005 [2024-11-21 05:14:47.662709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.679081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.679137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:31.005 [2024-11-21 05:14:47.679149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.311 ms 00:28:31.005 [2024-11-21 05:14:47.679167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.682208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.682251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:31.005 [2024-11-21 05:14:47.682262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:28:31.005 [2024-11-21 05:14:47.682272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.684678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.684720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:31.005 [2024-11-21 05:14:47.684742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.355 ms 00:28:31.005 [2024-11-21 05:14:47.684750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.685156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.685185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:31.005 [2024-11-21 05:14:47.685196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:28:31.005 [2024-11-21 05:14:47.685231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.715532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.715623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:31.005 [2024-11-21 05:14:47.715640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
30.268 ms 00:28:31.005 [2024-11-21 05:14:47.715650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.724739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:31.005 [2024-11-21 05:14:47.728716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.728762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:31.005 [2024-11-21 05:14:47.728778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.991 ms 00:28:31.005 [2024-11-21 05:14:47.728789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.728894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.728907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:31.005 [2024-11-21 05:14:47.728918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:31.005 [2024-11-21 05:14:47.728930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.730112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.730160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:31.005 [2024-11-21 05:14:47.730177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:28:31.005 [2024-11-21 05:14:47.730187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.730235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.005 [2024-11-21 05:14:47.730253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.005 [2024-11-21 05:14:47.730264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:31.005 [2024-11-21 05:14:47.730273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.005 [2024-11-21 05:14:47.730326] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:31.006 [2024-11-21 05:14:47.730342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.006 [2024-11-21 05:14:47.730352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:31.006 [2024-11-21 05:14:47.730362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:31.006 [2024-11-21 05:14:47.730374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.266 [2024-11-21 05:14:47.737431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.266 [2024-11-21 05:14:47.737482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:31.266 [2024-11-21 05:14:47.737496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.034 ms 00:28:31.266 [2024-11-21 05:14:47.737506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.266 [2024-11-21 05:14:47.737626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.266 [2024-11-21 05:14:47.737639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:31.266 [2024-11-21 05:14:47.737649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:28:31.266 [2024-11-21 05:14:47.737659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.266 
[2024-11-21 05:14:47.739375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.528 ms, result 0 00:28:32.212  [2024-11-21T05:14:50.333Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-21T05:14:51.277Z] Copying: 35/1024 [MB] (16 MBps) [2024-11-21T05:14:52.221Z] Copying: 54/1024 [MB] (18 MBps) [2024-11-21T05:14:53.168Z] Copying: 74/1024 [MB] (20 MBps) [2024-11-21T05:14:54.115Z] Copying: 96/1024 [MB] (21 MBps) [2024-11-21T05:14:55.060Z] Copying: 117/1024 [MB] (21 MBps) [2024-11-21T05:14:56.004Z] Copying: 134/1024 [MB] (16 MBps) [2024-11-21T05:14:56.948Z] Copying: 157/1024 [MB] (23 MBps) [2024-11-21T05:14:58.334Z] Copying: 176/1024 [MB] (18 MBps) [2024-11-21T05:14:59.275Z] Copying: 197/1024 [MB] (20 MBps) [2024-11-21T05:15:00.217Z] Copying: 221/1024 [MB] (23 MBps) [2024-11-21T05:15:01.162Z] Copying: 246/1024 [MB] (24 MBps) [2024-11-21T05:15:02.107Z] Copying: 265/1024 [MB] (18 MBps) [2024-11-21T05:15:03.054Z] Copying: 283/1024 [MB] (18 MBps) [2024-11-21T05:15:03.998Z] Copying: 295/1024 [MB] (12 MBps) [2024-11-21T05:15:04.939Z] Copying: 309/1024 [MB] (14 MBps) [2024-11-21T05:15:06.327Z] Copying: 328/1024 [MB] (18 MBps) [2024-11-21T05:15:07.274Z] Copying: 358/1024 [MB] (29 MBps) [2024-11-21T05:15:08.218Z] Copying: 383/1024 [MB] (25 MBps) [2024-11-21T05:15:09.162Z] Copying: 405/1024 [MB] (21 MBps) [2024-11-21T05:15:10.098Z] Copying: 421/1024 [MB] (16 MBps) [2024-11-21T05:15:11.041Z] Copying: 448/1024 [MB] (26 MBps) [2024-11-21T05:15:12.038Z] Copying: 468/1024 [MB] (20 MBps) [2024-11-21T05:15:12.982Z] Copying: 486/1024 [MB] (17 MBps) [2024-11-21T05:15:13.926Z] Copying: 503/1024 [MB] (17 MBps) [2024-11-21T05:15:15.317Z] Copying: 517/1024 [MB] (13 MBps) [2024-11-21T05:15:16.263Z] Copying: 527/1024 [MB] (10 MBps) [2024-11-21T05:15:17.208Z] Copying: 538/1024 [MB] (10 MBps) [2024-11-21T05:15:18.154Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-21T05:15:19.097Z] Copying: 559/1024 [MB] (10 MBps) [2024-11-21T05:15:20.046Z] Copying: 578/1024 [MB] (19 MBps) [2024-11-21T05:15:20.991Z] Copying: 589/1024 [MB] (10 MBps) [2024-11-21T05:15:21.936Z] Copying: 603/1024 [MB] (14 MBps) [2024-11-21T05:15:23.326Z] Copying: 618/1024 [MB] (15 MBps) [2024-11-21T05:15:24.270Z] Copying: 633/1024 [MB] (14 MBps) [2024-11-21T05:15:25.220Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-21T05:15:26.165Z] Copying: 655/1024 [MB] (12 MBps) [2024-11-21T05:15:27.113Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-21T05:15:28.058Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-21T05:15:29.004Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-21T05:15:29.946Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-21T05:15:31.335Z] Copying: 712/1024 [MB] (13 MBps) [2024-11-21T05:15:32.281Z] Copying: 723/1024 [MB] (10 MBps) [2024-11-21T05:15:33.227Z] Copying: 734/1024 [MB] (10 MBps) [2024-11-21T05:15:34.173Z] Copying: 744/1024 [MB] (10 MBps) [2024-11-21T05:15:35.120Z] Copying: 757/1024 [MB] (12 MBps) [2024-11-21T05:15:36.068Z] Copying: 774/1024 [MB] (17 MBps) [2024-11-21T05:15:37.014Z] Copying: 785/1024 [MB] (10 MBps) [2024-11-21T05:15:37.960Z] Copying: 796/1024 [MB] (11 MBps) [2024-11-21T05:15:39.364Z] Copying: 817/1024 [MB] (20 MBps) [2024-11-21T05:15:39.938Z] Copying: 836/1024 [MB] (19 MBps) [2024-11-21T05:15:41.325Z] Copying: 850/1024 [MB] (13 MBps) [2024-11-21T05:15:42.271Z] Copying: 871/1024 [MB] (21 MBps) [2024-11-21T05:15:43.262Z] Copying: 889/1024 [MB] (17 MBps) [2024-11-21T05:15:44.216Z] Copying: 900/1024 [MB] (10 MBps) [2024-11-21T05:15:45.160Z] Copying: 919/1024 [MB] (19 MBps) 
[2024-11-21T05:15:46.099Z] Copying: 934/1024 [MB] (15 MBps) [2024-11-21T05:15:47.039Z] Copying: 954/1024 [MB] (20 MBps) [2024-11-21T05:15:47.982Z] Copying: 974/1024 [MB] (19 MBps) [2024-11-21T05:15:49.364Z] Copying: 997/1024 [MB] (22 MBps) [2024-11-21T05:15:49.364Z] Copying: 1017/1024 [MB] (20 MBps) [2024-11-21T05:15:49.627Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-21 05:15:49.473729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.474217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:32.893 [2024-11-21 05:15:49.474392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:32.893 [2024-11-21 05:15:49.474431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.474481] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:32.893 [2024-11-21 05:15:49.476015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.476082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:32.893 [2024-11-21 05:15:49.476118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:29:32.893 [2024-11-21 05:15:49.476148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.476601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.476748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:32.893 [2024-11-21 05:15:49.476832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:29:32.893 [2024-11-21 05:15:49.476867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.482227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.482393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:32.893 [2024-11-21 05:15:49.482593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.303 ms 00:29:32.893 [2024-11-21 05:15:49.482665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.489957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.490114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:32.893 [2024-11-21 05:15:49.490134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.175 ms 00:29:32.893 [2024-11-21 05:15:49.490144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.493351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.493398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:32.893 [2024-11-21 05:15:49.493411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.121 ms 00:29:32.893 [2024-11-21 05:15:49.493419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.498375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.498422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:32.893 [2024-11-21 05:15:49.498433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.908 ms 
00:29:32.893 [2024-11-21 05:15:49.498443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.500913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.500952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:32.893 [2024-11-21 05:15:49.500964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:29:32.893 [2024-11-21 05:15:49.500972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.503434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.503476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:32.893 [2024-11-21 05:15:49.503487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.426 ms 00:29:32.893 [2024-11-21 05:15:49.503494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.505675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.505713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:32.893 [2024-11-21 05:15:49.505724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:29:32.893 [2024-11-21 05:15:49.505732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.507356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.507396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:32.893 [2024-11-21 05:15:49.507406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:29:32.893 [2024-11-21 05:15:49.507415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.509347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.893 [2024-11-21 05:15:49.509386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:32.893 [2024-11-21 05:15:49.509396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:29:32.893 [2024-11-21 05:15:49.509404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.893 [2024-11-21 05:15:49.509445] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:32.893 [2024-11-21 05:15:49.509463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:32.893 [2024-11-21 05:15:49.509475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:32.893 [2024-11-21 05:15:49.509484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509526] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:32.893 [2024-11-21 05:15:49.509695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 
05:15:49.509749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:29:32.894 [2024-11-21 05:15:49.509946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.509995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:32.894 [2024-11-21 05:15:49.510138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:32.894 [2024-11-21 05:15:49.510294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:32.894 [2024-11-21 05:15:49.510304] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03f7d698-2ac6-4548-b62b-f05d68202e0f
00:29:32.894 [2024-11-21 05:15:49.510313] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:29:32.894 [2024-11-21 05:15:49.510322] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:32.894 [2024-11-21 05:15:49.510330] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:32.894 [2024-11-21 05:15:49.510350] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:32.894 [2024-11-21 05:15:49.510358] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:32.894 [2024-11-21 05:15:49.510367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:32.894
[2024-11-21 05:15:49.510375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:32.894 [2024-11-21 05:15:49.510382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:32.894 [2024-11-21 05:15:49.510389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:32.894 [2024-11-21 05:15:49.510406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-11-21 05:15:49.510423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:32.894 [2024-11-21 05:15:49.510437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:29:32.894 [2024-11-21 05:15:49.510446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-11-21 05:15:49.513572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-11-21 05:15:49.513627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:32.894 [2024-11-21 05:15:49.513649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:29:32.894 [2024-11-21 05:15:49.513658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.894 [2024-11-21 05:15:49.513825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.894 [2024-11-21 05:15:49.513844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:32.894 [2024-11-21 05:15:49.513854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:29:32.895 [2024-11-21 05:15:49.513863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.523841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.523896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:32.895 [2024-11-21 05:15:49.523907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.523921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.523982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.523992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:32.895 [2024-11-21 05:15:49.524001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.524010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.524071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.524084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:32.895 [2024-11-21 05:15:49.524093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.524101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.524123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.524132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:32.895 [2024-11-21 05:15:49.524141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.524148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.543243] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.543312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:32.895 [2024-11-21 05:15:49.543325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.543334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.558806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.558863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.895 [2024-11-21 05:15:49.558876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.558897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.558959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.558970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:32.895 [2024-11-21 05:15:49.558979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.558989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.559035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.559055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:32.895 [2024-11-21 05:15:49.559064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.559074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.559157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.559169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:32.895 [2024-11-21 05:15:49.559179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.559187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.559230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.559241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:32.895 [2024-11-21 05:15:49.559254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.559263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.559314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.559325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:32.895 [2024-11-21 05:15:49.559335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.559344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 [2024-11-21 05:15:49.559400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.895 [2024-11-21 05:15:49.559416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:32.895 [2024-11-21 05:15:49.559426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.895 [2024-11-21 05:15:49.559436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.895 
[2024-11-21 05:15:49.559636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.853 ms, result 0 00:29:33.156 00:29:33.156 00:29:33.156 05:15:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:35.700 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:35.700 Process with pid 91524 is not found 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91524 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91524 ']' 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91524 00:29:35.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91524) - No such process 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91524 is not found' 00:29:35.700 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:35.962 Remove shared memory files 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:35.962 00:29:35.962 real 3m39.429s 00:29:35.962 user 4m5.554s 00:29:35.962 sys 0m27.559s 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.962 ************************************ 00:29:35.962 05:15:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:35.962 END TEST ftl_dirty_shutdown 00:29:35.962 ************************************ 00:29:36.223 05:15:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:36.223 05:15:52 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:36.223 05:15:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.223 05:15:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:36.223 ************************************ 00:29:36.223 START TEST ftl_upgrade_shutdown 00:29:36.223 
************************************ 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:36.223 * Looking for test storage... 00:29:36.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.223 --rc genhtml_branch_coverage=1 00:29:36.223 --rc genhtml_function_coverage=1 00:29:36.223 --rc genhtml_legend=1 00:29:36.223 --rc geninfo_all_blocks=1 00:29:36.223 --rc geninfo_unexecuted_blocks=1 00:29:36.223 00:29:36.223 ' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.223 --rc genhtml_branch_coverage=1 00:29:36.223 --rc genhtml_function_coverage=1 00:29:36.223 --rc genhtml_legend=1 00:29:36.223 --rc geninfo_all_blocks=1 00:29:36.223 --rc geninfo_unexecuted_blocks=1 00:29:36.223 00:29:36.223 ' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.223 --rc genhtml_branch_coverage=1 00:29:36.223 --rc genhtml_function_coverage=1 00:29:36.223 --rc genhtml_legend=1 00:29:36.223 --rc geninfo_all_blocks=1 00:29:36.223 --rc geninfo_unexecuted_blocks=1 00:29:36.223 00:29:36.223 ' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:36.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.223 --rc genhtml_branch_coverage=1 00:29:36.223 --rc genhtml_function_coverage=1 00:29:36.223 --rc genhtml_legend=1 00:29:36.223 --rc geninfo_all_blocks=1 00:29:36.223 --rc geninfo_unexecuted_blocks=1 00:29:36.223 00:29:36.223 ' 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:36.223 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:36.224 05:15:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93902 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93902 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93902 ']' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.224 05:15:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:36.484 [2024-11-21 05:15:52.985711] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
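At this point the upgrade test has exported its FTL parameters and started a dedicated spdk_tgt pinned to core 0; waitforlisten then blocks until the target's RPC socket at /var/tmp/spdk.sock answers, so none of the bdev RPCs that follow can race the startup. The helper boils down to a retry loop around a cheap RPC; a minimal sketch of that pattern (the retry count, poll interval, and the use of rpc_get_methods as the probe are illustrative assumptions, not the exact implementation in autotest_common.sh):

    # Start the target on core 0 and wait for its RPC socket to answer.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break    # target is up and serving RPCs
        fi
        # Bail out early if the target died instead of merely being slow.
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited during startup' >&2; exit 1; }
        sleep 0.5
    done

Once the loop exits, the EAL and reactor messages below confirm the target came up single-core, exactly as the cpumask requested.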
00:29:36.484 [2024-11-21 05:15:52.985863] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93902 ] 00:29:36.484 [2024-11-21 05:15:53.147960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.484 [2024-11-21 05:15:53.175328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.056 05:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.057 05:15:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:37.057 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:37.057 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:37.057 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:37.057 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.057 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:37.318 05:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:37.580 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:37.580 { 00:29:37.580 "name": "basen1", 00:29:37.580 "aliases": [ 00:29:37.580 "0ae6051f-3448-4add-be77-ee14655570d3" 00:29:37.580 ], 00:29:37.580 "product_name": "NVMe disk", 00:29:37.581 "block_size": 4096, 00:29:37.581 "num_blocks": 1310720, 00:29:37.581 "uuid": "0ae6051f-3448-4add-be77-ee14655570d3", 00:29:37.581 "numa_id": -1, 00:29:37.581 "assigned_rate_limits": { 00:29:37.581 "rw_ios_per_sec": 0, 00:29:37.581 "rw_mbytes_per_sec": 0, 00:29:37.581 "r_mbytes_per_sec": 0, 00:29:37.581 "w_mbytes_per_sec": 0 00:29:37.581 }, 00:29:37.581 "claimed": true, 00:29:37.581 "claim_type": "read_many_write_one", 00:29:37.581 "zoned": false, 00:29:37.581 "supported_io_types": { 00:29:37.581 "read": true, 00:29:37.581 "write": true, 00:29:37.581 "unmap": true, 00:29:37.581 "flush": true, 00:29:37.581 "reset": true, 00:29:37.581 "nvme_admin": true, 00:29:37.581 "nvme_io": true, 00:29:37.581 "nvme_io_md": false, 00:29:37.581 "write_zeroes": true, 00:29:37.581 "zcopy": false, 00:29:37.581 "get_zone_info": false, 00:29:37.581 "zone_management": false, 00:29:37.581 "zone_append": false, 00:29:37.581 "compare": true, 00:29:37.581 "compare_and_write": false, 00:29:37.581 "abort": true, 00:29:37.581 "seek_hole": false, 00:29:37.581 "seek_data": false, 00:29:37.581 "copy": true, 00:29:37.581 "nvme_iov_md": false 00:29:37.581 }, 00:29:37.581 "driver_specific": { 00:29:37.581 "nvme": [ 00:29:37.581 { 00:29:37.581 "pci_address": "0000:00:11.0", 00:29:37.581 "trid": { 00:29:37.581 "trtype": "PCIe", 00:29:37.581 "traddr": "0000:00:11.0" 00:29:37.581 }, 00:29:37.581 "ctrlr_data": { 00:29:37.581 "cntlid": 0, 00:29:37.581 "vendor_id": "0x1b36", 00:29:37.581 "model_number": "QEMU NVMe Ctrl", 00:29:37.581 "serial_number": "12341", 00:29:37.581 "firmware_revision": "8.0.0", 00:29:37.581 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:37.581 "oacs": { 00:29:37.581 "security": 0, 00:29:37.581 "format": 1, 00:29:37.581 "firmware": 0, 00:29:37.581 "ns_manage": 1 00:29:37.581 }, 00:29:37.581 "multi_ctrlr": false, 00:29:37.581 "ana_reporting": false 00:29:37.581 }, 00:29:37.581 "vs": { 00:29:37.581 "nvme_version": "1.4" 00:29:37.581 }, 00:29:37.581 "ns_data": { 00:29:37.581 "id": 1, 00:29:37.581 "can_share": false 00:29:37.581 } 00:29:37.581 } 00:29:37.581 ], 00:29:37.581 "mp_policy": "active_passive" 00:29:37.581 } 00:29:37.581 } 00:29:37.581 ]' 00:29:37.581 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=29125aa1-3373-444f-af12-bdfc51a7eb82 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:37.843 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29125aa1-3373-444f-af12-bdfc51a7eb82 00:29:38.104 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:38.366 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=972f0e6f-a741-4658-a819-2b99d284808c 00:29:38.366 05:15:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 972f0e6f-a741-4658-a819-2b99d284808c 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ae8caeae-3e96-4b6e-b585-df5a77e7aea4 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ae8caeae-3e96-4b6e-b585-df5a77e7aea4 ]] 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ae8caeae-3e96-4b6e-b585-df5a77e7aea4 5120 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ae8caeae-3e96-4b6e-b585-df5a77e7aea4 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ae8caeae-3e96-4b6e-b585-df5a77e7aea4 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ae8caeae-3e96-4b6e-b585-df5a77e7aea4 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:38.628 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae8caeae-3e96-4b6e-b585-df5a77e7aea4 00:29:38.888 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:38.888 { 00:29:38.888 "name": "ae8caeae-3e96-4b6e-b585-df5a77e7aea4", 00:29:38.888 "aliases": [ 00:29:38.888 "lvs/basen1p0" 00:29:38.888 ], 00:29:38.888 "product_name": "Logical Volume", 00:29:38.888 "block_size": 4096, 00:29:38.888 "num_blocks": 5242880, 00:29:38.888 "uuid": "ae8caeae-3e96-4b6e-b585-df5a77e7aea4", 00:29:38.888 "assigned_rate_limits": { 00:29:38.888 "rw_ios_per_sec": 0, 00:29:38.888 "rw_mbytes_per_sec": 0, 00:29:38.888 "r_mbytes_per_sec": 0, 00:29:38.888 "w_mbytes_per_sec": 0 00:29:38.888 }, 00:29:38.888 "claimed": false, 00:29:38.888 "zoned": false, 00:29:38.888 "supported_io_types": { 00:29:38.888 "read": true, 00:29:38.888 "write": true, 00:29:38.888 "unmap": true, 00:29:38.888 "flush": false, 00:29:38.888 "reset": true, 00:29:38.888 "nvme_admin": false, 00:29:38.888 "nvme_io": false, 00:29:38.888 "nvme_io_md": false, 00:29:38.888 "write_zeroes": 
true, 00:29:38.888 "zcopy": false, 00:29:38.888 "get_zone_info": false, 00:29:38.888 "zone_management": false, 00:29:38.888 "zone_append": false, 00:29:38.888 "compare": false, 00:29:38.888 "compare_and_write": false, 00:29:38.888 "abort": false, 00:29:38.888 "seek_hole": true, 00:29:38.888 "seek_data": true, 00:29:38.888 "copy": false, 00:29:38.889 "nvme_iov_md": false 00:29:38.889 }, 00:29:38.889 "driver_specific": { 00:29:38.889 "lvol": { 00:29:38.889 "lvol_store_uuid": "972f0e6f-a741-4658-a819-2b99d284808c", 00:29:38.889 "base_bdev": "basen1", 00:29:38.889 "thin_provision": true, 00:29:38.889 "num_allocated_clusters": 0, 00:29:38.889 "snapshot": false, 00:29:38.889 "clone": false, 00:29:38.889 "esnap_clone": false 00:29:38.889 } 00:29:38.889 } 00:29:38.889 } 00:29:38.889 ]' 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:38.889 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:39.149 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:39.149 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:39.149 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:39.408 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:39.408 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:39.408 05:15:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ae8caeae-3e96-4b6e-b585-df5a77e7aea4 -c cachen1p0 --l2p_dram_limit 2 00:29:39.669 [2024-11-21 05:15:56.194159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.194209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:39.669 [2024-11-21 05:15:56.194225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:39.669 [2024-11-21 05:15:56.194237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.194296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.194306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:39.669 [2024-11-21 05:15:56.194314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:39.669 [2024-11-21 05:15:56.194326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.194342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:39.669 [2024-11-21 
05:15:56.194596] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:39.669 [2024-11-21 05:15:56.194625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.194633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:39.669 [2024-11-21 05:15:56.194640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.287 ms 00:29:39.669 [2024-11-21 05:15:56.194649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.194679] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8e7eeef6-f3e3-4bfc-9e6d-1a099f7fd873 00:29:39.669 [2024-11-21 05:15:56.195973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.196002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:39.669 [2024-11-21 05:15:56.196014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:39.669 [2024-11-21 05:15:56.196021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.202926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.203051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:39.669 [2024-11-21 05:15:56.203069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.838 ms 00:29:39.669 [2024-11-21 05:15:56.203076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.203117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.203130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:39.669 [2024-11-21 05:15:56.203139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:39.669 [2024-11-21 05:15:56.203145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.203188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.203196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:39.669 [2024-11-21 05:15:56.203206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:39.669 [2024-11-21 05:15:56.203212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.203235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:39.669 [2024-11-21 05:15:56.204907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.204936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:39.669 [2024-11-21 05:15:56.204944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.681 ms 00:29:39.669 [2024-11-21 05:15:56.204951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.204973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.669 [2024-11-21 05:15:56.204982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:39.669 [2024-11-21 05:15:56.204989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:39.669 [2024-11-21 05:15:56.205002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:39.669 [2024-11-21 05:15:56.205017] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:39.670 [2024-11-21 05:15:56.205132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:39.670 [2024-11-21 05:15:56.205142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:39.670 [2024-11-21 05:15:56.205151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:39.670 [2024-11-21 05:15:56.205160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205173] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:39.670 [2024-11-21 05:15:56.205196] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:39.670 [2024-11-21 05:15:56.205202] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:39.670 [2024-11-21 05:15:56.205209] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:39.670 [2024-11-21 05:15:56.205223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.670 [2024-11-21 05:15:56.205234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:39.670 [2024-11-21 05:15:56.205241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:29:39.670 [2024-11-21 05:15:56.205251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.670 [2024-11-21 05:15:56.205315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.670 [2024-11-21 05:15:56.205325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:39.670 [2024-11-21 05:15:56.205331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:39.670 [2024-11-21 05:15:56.205338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.670 [2024-11-21 05:15:56.205412] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:39.670 [2024-11-21 05:15:56.205420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:39.670 [2024-11-21 05:15:56.205427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:39.670 [2024-11-21 05:15:56.205447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:39.670 [2024-11-21 05:15:56.205459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:39.670 [2024-11-21 05:15:56.205464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:39.670 [2024-11-21 05:15:56.205471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:39.670 [2024-11-21 05:15:56.205483] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:39.670 [2024-11-21 05:15:56.205488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:39.670 [2024-11-21 05:15:56.205501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:39.670 [2024-11-21 05:15:56.205507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:39.670 [2024-11-21 05:15:56.205519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:39.670 [2024-11-21 05:15:56.205524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:39.670 [2024-11-21 05:15:56.205535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:39.670 [2024-11-21 05:15:56.205542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:39.670 [2024-11-21 05:15:56.205554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:39.670 [2024-11-21 05:15:56.205560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:39.670 [2024-11-21 05:15:56.205573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:39.670 [2024-11-21 05:15:56.205582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:39.670 [2024-11-21 05:15:56.205597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:39.670 [2024-11-21 05:15:56.205603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:39.670 [2024-11-21 05:15:56.205631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:39.670 [2024-11-21 05:15:56.205640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:39.670 [2024-11-21 05:15:56.205654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:39.670 [2024-11-21 05:15:56.205673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:39.670 [2024-11-21 05:15:56.205694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:39.670 [2024-11-21 05:15:56.205700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205707] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:39.670 [2024-11-21 05:15:56.205714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:39.670 [2024-11-21 05:15:56.205724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:39.670 [2024-11-21 05:15:56.205740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:39.670 [2024-11-21 05:15:56.205746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:39.670 [2024-11-21 05:15:56.205753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:39.670 [2024-11-21 05:15:56.205760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:39.670 [2024-11-21 05:15:56.205766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:39.670 [2024-11-21 05:15:56.205771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:39.670 [2024-11-21 05:15:56.205781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:39.670 [2024-11-21 05:15:56.205790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:39.670 [2024-11-21 05:15:56.205808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:39.670 [2024-11-21 05:15:56.205832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:39.670 [2024-11-21 05:15:56.205838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:39.670 [2024-11-21 05:15:56.205846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:39.670 [2024-11-21 05:15:56.205851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:39.670 [2024-11-21 05:15:56.205895] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:39.670 [2024-11-21 05:15:56.205901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:39.670 [2024-11-21 05:15:56.205913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:39.670 [2024-11-21 05:15:56.205920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:39.670 [2024-11-21 05:15:56.205926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:39.670 [2024-11-21 05:15:56.205933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.670 [2024-11-21 05:15:56.205941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:39.670 [2024-11-21 05:15:56.205950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:29:39.670 [2024-11-21 05:15:56.205955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.670 [2024-11-21 05:15:56.205993] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
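Editor's note: the layout dump above reports each region twice, once in MiB and once in raw FTL blocks, and the two columns agree if one assumes the default 4 KiB FTL block size (an inference from the dump itself, not stated in the log). A minimal shell cross-check against the data_btm region (offset 0.25 MiB / 18432.00 MiB; base-dev entry type:0x9 blk_offs:0x40 blk_sz:0x480000):
blk_size=4096  # assumed FTL block size in bytes
awk -v b=$(( 0x40 * blk_size ))     'BEGIN { printf "offset: %.2f MiB\n", b / 1048576 }'
awk -v b=$(( 0x480000 * blk_size )) 'BEGIN { printf "blocks: %.2f MiB\n", b / 1048576 }'
# prints "offset: 0.25 MiB" and "blocks: 18432.00 MiB", matching data_btm above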
00:29:39.670 [2024-11-21 05:15:56.206002] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:43.875 [2024-11-21 05:16:00.174684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.174878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:43.875 [2024-11-21 05:16:00.174898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3968.672 ms 00:29:43.875 [2024-11-21 05:16:00.174905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.182772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.182813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:43.875 [2024-11-21 05:16:00.182824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.774 ms 00:29:43.875 [2024-11-21 05:16:00.182830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.182886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.182893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:43.875 [2024-11-21 05:16:00.182905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:43.875 [2024-11-21 05:16:00.182911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.190575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.190754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:43.875 [2024-11-21 05:16:00.190771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.641 ms 00:29:43.875 [2024-11-21 05:16:00.190777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.190804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.190811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:43.875 [2024-11-21 05:16:00.190819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:43.875 [2024-11-21 05:16:00.190824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.191134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.191147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:43.875 [2024-11-21 05:16:00.191157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:29:43.875 [2024-11-21 05:16:00.191167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.191200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.191211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:43.875 [2024-11-21 05:16:00.191221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:43.875 [2024-11-21 05:16:00.191227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.196072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.196099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:43.875 [2024-11-21 05:16:00.196108] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.824 ms 00:29:43.875 [2024-11-21 05:16:00.196118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.202739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:43.875 [2024-11-21 05:16:00.203470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.203497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:43.875 [2024-11-21 05:16:00.203504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.304 ms 00:29:43.875 [2024-11-21 05:16:00.203512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.231563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.231600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:43.875 [2024-11-21 05:16:00.231625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.032 ms 00:29:43.875 [2024-11-21 05:16:00.231639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.231705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.231716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:43.875 [2024-11-21 05:16:00.231724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:43.875 [2024-11-21 05:16:00.231732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.234935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.234965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:43.875 [2024-11-21 05:16:00.234973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.189 ms 00:29:43.875 [2024-11-21 05:16:00.234983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.238106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.238235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:43.875 [2024-11-21 05:16:00.238247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.095 ms 00:29:43.875 [2024-11-21 05:16:00.238297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.238518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.238528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:43.875 [2024-11-21 05:16:00.238535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:29:43.875 [2024-11-21 05:16:00.238551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.273417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.273448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:43.875 [2024-11-21 05:16:00.273457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.851 ms 00:29:43.875 [2024-11-21 05:16:00.273469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.277585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:43.875 [2024-11-21 05:16:00.277627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:43.875 [2024-11-21 05:16:00.277635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.079 ms 00:29:43.875 [2024-11-21 05:16:00.277643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.281034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.281060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:43.875 [2024-11-21 05:16:00.281067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.364 ms 00:29:43.875 [2024-11-21 05:16:00.281074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.285043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.285073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:43.875 [2024-11-21 05:16:00.285080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.943 ms 00:29:43.875 [2024-11-21 05:16:00.285089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.285118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.285127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:43.875 [2024-11-21 05:16:00.285134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:43.875 [2024-11-21 05:16:00.285141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.875 [2024-11-21 05:16:00.285191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:43.875 [2024-11-21 05:16:00.285202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:43.876 [2024-11-21 05:16:00.285208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:43.876 [2024-11-21 05:16:00.285231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:43.876 [2024-11-21 05:16:00.285880] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4091.405 ms, result 0 00:29:43.876 { 00:29:43.876 "name": "ftl", 00:29:43.876 "uuid": "8e7eeef6-f3e3-4bfc-9e6d-1a099f7fd873" 00:29:43.876 } 00:29:43.876 05:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:43.876 [2024-11-21 05:16:00.452137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.876 05:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:44.138 05:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:44.138 [2024-11-21 05:16:00.860404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:44.400 05:16:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:44.400 [2024-11-21 05:16:01.080737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:44.400 05:16:01 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:44.973 Fill FTL, iteration 1 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94025 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94025 /var/tmp/spdk.tgt.sock 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94025 ']' 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:44.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.973 05:16:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:44.973 [2024-11-21 05:16:01.510435] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
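Editor's note: collecting the commands traced at ftl/common.sh@121-@126 above, the NVMe/TCP export of the freshly created FTL bdev reduces to this RPC sequence (paths and flags exactly as logged):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
$rpc save_config
# -a allows any host, -m 1 caps the subsystem at one namespace; the listener
# matches the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice above.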
00:29:44.973 [2024-11-21 05:16:01.510696] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94025 ] 00:29:44.973 [2024-11-21 05:16:01.667684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.973 [2024-11-21 05:16:01.692055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.915 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.915 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:45.915 05:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:45.915 ftln1 00:29:45.915 05:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:45.915 05:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94025 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94025 ']' 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94025 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94025 00:29:46.178 killing process with pid 94025 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94025' 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94025 00:29:46.178 05:16:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94025 00:29:46.439 05:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:46.439 05:16:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:46.701 [2024-11-21 05:16:03.222806] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
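Editor's note: a hedged reconstruction of the tcp_dd helper whose pieces are traced above (common.sh@151-@199): a throwaway spdk_tgt attaches the exported subsystem as bdev ftln1, its bdev config is captured into ini.json, the helper target is killed, and spdk_dd then replays that config to drive the copy. Only the individual commands are verbatim from the log; the function framing is a sketch, not the common.sh source.
tcp_dd() {
  local spdk=/home/vagrant/spdk_repo/spdk
  local ini=$spdk/test/ftl/config/ini.json
  local rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  if [[ ! -f $ini ]]; then                        # @153: config is reused once it exists
    $spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
    # controller "ftl" + namespace 1 surfaces as bdev "ftln1"
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } > "$ini"
    killprocess "$spdk_ini_pid"                   # helper target no longer needed
  fi
  $spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$ini" "$@"
}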
00:29:46.701 [2024-11-21 05:16:03.222938] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94056 ] 00:29:46.701 [2024-11-21 05:16:03.379204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.701 [2024-11-21 05:16:03.403505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.088  [2024-11-21T05:16:05.762Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-21T05:16:06.697Z] Copying: 406/1024 [MB] (212 MBps) [2024-11-21T05:16:07.633Z] Copying: 672/1024 [MB] (266 MBps) [2024-11-21T05:16:08.203Z] Copying: 941/1024 [MB] (269 MBps) [2024-11-21T05:16:08.203Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:29:51.469 00:29:51.469 Calculate MD5 checksum, iteration 1 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:51.469 05:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.469 [2024-11-21 05:16:08.167625] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
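Editor's note: the read-back direction uses the same helper with the data flags flipped; because ini.json already exists, tcp_initiator_setup returns immediately (common.sh@153-@154 above) and only spdk_dd runs. The invocation, exactly as traced at upgrade_shutdown.sh@44:
tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
       --bs=1048576 --count=1024 --qd=2 --skip=0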
00:29:51.469 [2024-11-21 05:16:08.167766] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94113 ] 00:29:51.728 [2024-11-21 05:16:08.326982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.728 [2024-11-21 05:16:08.361853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.100  [2024-11-21T05:16:10.401Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-21T05:16:10.401Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:29:53.667 00:29:53.667 05:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:53.667 05:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:56.211 Fill FTL, iteration 2 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5ce934df98440cbdd2652de6d4141de7 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:56.211 05:16:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:56.211 [2024-11-21 05:16:12.576742] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
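Editor's note: putting the traced variables (upgrade_shutdown.sh@28-@35) and steps together, the whole fill-and-checksum phase is, in sketch form (loop body reconstructed from the trace, not copied from the script):
size=1073741824; seek=0; skip=0; bs=1048576; count=1024; iterations=2; qd=2
sums=()
file=/home/vagrant/spdk_repo/spdk/test/ftl/file
for (( i = 0; i < iterations; i++ )); do
  echo "Fill FTL, iteration $(( i + 1 ))"
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
  seek=$(( seek + count ))                        # @41: 0 -> 1024 -> 2048
  echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
  tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
  skip=$(( skip + count ))                        # @45: 0 -> 1024 -> 2048
  sums[i]=$(md5sum "$file" | cut -f1 -d ' ')      # @47-@48
done
# Iteration 1 recorded 5ce934df98440cbdd2652de6d4141de7 above; the sums are
# presumably re-checked after the shutdown/upgrade cycle later in the test.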
00:29:56.211 [2024-11-21 05:16:12.576865] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94159 ] 00:29:56.211 [2024-11-21 05:16:12.730048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.211 [2024-11-21 05:16:12.752834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.586  [2024-11-21T05:16:15.254Z] Copying: 271/1024 [MB] (271 MBps) [2024-11-21T05:16:16.206Z] Copying: 539/1024 [MB] (268 MBps) [2024-11-21T05:16:16.820Z] Copying: 809/1024 [MB] (270 MBps) [2024-11-21T05:16:17.080Z] Copying: 1024/1024 [MB] (average 269 MBps) 00:30:00.346 00:30:00.346 Calculate MD5 checksum, iteration 2 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:00.346 05:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:00.346 [2024-11-21 05:16:17.005156] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:30:00.346 [2024-11-21 05:16:17.005473] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94212 ] 00:30:00.605 [2024-11-21 05:16:17.162185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.605 [2024-11-21 05:16:17.190968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.978  [2024-11-21T05:16:19.281Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-21T05:16:24.574Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:30:07.840 00:30:07.840 05:16:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:07.840 05:16:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:09.220 05:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:09.220 05:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=16e57c257b7de72df0eea28cef15de92 00:30:09.220 05:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:09.220 05:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:09.220 05:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:09.220 [2024-11-21 05:16:25.804307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.220 [2024-11-21 05:16:25.804350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:09.220 [2024-11-21 05:16:25.804362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:09.220 [2024-11-21 05:16:25.804368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.220 [2024-11-21 05:16:25.804396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.220 [2024-11-21 05:16:25.804405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:09.220 [2024-11-21 05:16:25.804411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:09.220 [2024-11-21 05:16:25.804417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.220 [2024-11-21 05:16:25.804432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.220 [2024-11-21 05:16:25.804439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:09.220 [2024-11-21 05:16:25.804445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:09.220 [2024-11-21 05:16:25.804456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.220 [2024-11-21 05:16:25.804505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:30:09.220 true 00:30:09.220 05:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:09.481 { 00:30:09.481 "name": "ftl", 00:30:09.481 "properties": [ 00:30:09.481 { 00:30:09.481 "name": "superblock_version", 00:30:09.481 "value": 5, 00:30:09.481 "read-only": true 00:30:09.481 }, 00:30:09.481 { 00:30:09.481 "name": "base_device", 00:30:09.481 "bands": [ 00:30:09.481 { 00:30:09.481 "id": 0, 00:30:09.481 "state": "FREE", 00:30:09.481 "validity": 0.0 
00:30:09.481 }, 00:30:09.481 { 00:30:09.481 "id": 1, 00:30:09.481 "state": "FREE", 00:30:09.481 "validity": 0.0 00:30:09.481 }, 00:30:09.481 { 00:30:09.482 "id": 2, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 3, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 4, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 5, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 6, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 7, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 8, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 9, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 10, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 11, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 12, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 13, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 14, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 15, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 16, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 17, 00:30:09.482 "state": "FREE", 00:30:09.482 "validity": 0.0 00:30:09.482 } 00:30:09.482 ], 00:30:09.482 "read-only": true 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "name": "cache_device", 00:30:09.482 "type": "bdev", 00:30:09.482 "chunks": [ 00:30:09.482 { 00:30:09.482 "id": 0, 00:30:09.482 "state": "INACTIVE", 00:30:09.482 "utilization": 0.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 1, 00:30:09.482 "state": "CLOSED", 00:30:09.482 "utilization": 1.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 2, 00:30:09.482 "state": "CLOSED", 00:30:09.482 "utilization": 1.0 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 3, 00:30:09.482 "state": "OPEN", 00:30:09.482 "utilization": 0.001953125 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "id": 4, 00:30:09.482 "state": "OPEN", 00:30:09.482 "utilization": 0.0 00:30:09.482 } 00:30:09.482 ], 00:30:09.482 "read-only": true 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "name": "verbose_mode", 00:30:09.482 "value": true, 00:30:09.482 "unit": "", 00:30:09.482 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:09.482 }, 00:30:09.482 { 00:30:09.482 "name": "prep_upgrade_on_shutdown", 00:30:09.482 "value": false, 00:30:09.482 "unit": "", 00:30:09.482 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:09.482 } 00:30:09.482 ] 00:30:09.482 } 00:30:09.482 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:09.743 [2024-11-21 05:16:26.220593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:09.743 [2024-11-21 05:16:26.220752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:09.743 [2024-11-21 05:16:26.220766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:09.743 [2024-11-21 05:16:26.220772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.743 [2024-11-21 05:16:26.220793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.743 [2024-11-21 05:16:26.220799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:09.743 [2024-11-21 05:16:26.220805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:09.743 [2024-11-21 05:16:26.220812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.743 [2024-11-21 05:16:26.220827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.743 [2024-11-21 05:16:26.220833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:09.743 [2024-11-21 05:16:26.220839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:09.743 [2024-11-21 05:16:26.220844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.743 [2024-11-21 05:16:26.220886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.279 ms, result 0 00:30:09.743 true 00:30:09.743 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:09.743 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:09.743 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:09.743 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:09.743 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:09.743 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:10.004 [2024-11-21 05:16:26.628928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.004 [2024-11-21 05:16:26.629045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:10.004 [2024-11-21 05:16:26.629058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:10.004 [2024-11-21 05:16:26.629064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.004 [2024-11-21 05:16:26.629084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.004 [2024-11-21 05:16:26.629090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:10.004 [2024-11-21 05:16:26.629096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:10.004 [2024-11-21 05:16:26.629102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.004 [2024-11-21 05:16:26.629116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.004 [2024-11-21 05:16:26.629122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:10.004 [2024-11-21 05:16:26.629127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:10.004 [2024-11-21 05:16:26.629133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:10.004 [2024-11-21 05:16:26.629173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.233 ms, result 0 00:30:10.004 true 00:30:10.004 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:10.266 { 00:30:10.266 "name": "ftl", 00:30:10.266 "properties": [ 00:30:10.266 { 00:30:10.266 "name": "superblock_version", 00:30:10.266 "value": 5, 00:30:10.266 "read-only": true 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "name": "base_device", 00:30:10.266 "bands": [ 00:30:10.266 { 00:30:10.266 "id": 0, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 1, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 2, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 3, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 4, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 5, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 6, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 7, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 8, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 9, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 10, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 11, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 12, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 13, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 14, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 15, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 16, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 17, 00:30:10.266 "state": "FREE", 00:30:10.266 "validity": 0.0 00:30:10.266 } 00:30:10.266 ], 00:30:10.266 "read-only": true 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "name": "cache_device", 00:30:10.266 "type": "bdev", 00:30:10.266 "chunks": [ 00:30:10.266 { 00:30:10.266 "id": 0, 00:30:10.266 "state": "INACTIVE", 00:30:10.266 "utilization": 0.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 1, 00:30:10.266 "state": "CLOSED", 00:30:10.266 "utilization": 1.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 2, 00:30:10.266 "state": "CLOSED", 00:30:10.266 "utilization": 1.0 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 3, 00:30:10.266 "state": "OPEN", 00:30:10.266 "utilization": 0.001953125 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "id": 4, 00:30:10.266 "state": "OPEN", 00:30:10.266 "utilization": 0.0 00:30:10.266 } 00:30:10.266 ], 00:30:10.266 "read-only": true 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "name": "verbose_mode", 
00:30:10.266 "value": true, 00:30:10.266 "unit": "", 00:30:10.266 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:10.266 }, 00:30:10.266 { 00:30:10.266 "name": "prep_upgrade_on_shutdown", 00:30:10.266 "value": true, 00:30:10.266 "unit": "", 00:30:10.266 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:10.266 } 00:30:10.266 ] 00:30:10.266 } 00:30:10.266 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:10.266 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93902 ]] 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93902 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93902 ']' 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93902 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93902 00:30:10.267 killing process with pid 93902 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93902' 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93902 00:30:10.267 05:16:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93902 00:30:10.267 [2024-11-21 05:16:26.978138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:10.267 [2024-11-21 05:16:26.983904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.267 [2024-11-21 05:16:26.983935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:10.267 [2024-11-21 05:16:26.983945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:10.267 [2024-11-21 05:16:26.983951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:10.267 [2024-11-21 05:16:26.983968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:10.267 [2024-11-21 05:16:26.984347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:10.267 [2024-11-21 05:16:26.984361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:10.267 [2024-11-21 05:16:26.984371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.369 ms 00:30:10.267 [2024-11-21 05:16:26.984377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.308345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.308452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:20.280 [2024-11-21 05:16:36.308472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9323.906 ms 00:30:20.280 [2024-11-21 05:16:36.308482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.310240] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.310283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:20.280 [2024-11-21 05:16:36.310296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.739 ms 00:30:20.280 [2024-11-21 05:16:36.310306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.311484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.311506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:20.280 [2024-11-21 05:16:36.311525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.133 ms 00:30:20.280 [2024-11-21 05:16:36.311535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.315382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.315438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:20.280 [2024-11-21 05:16:36.315452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.803 ms 00:30:20.280 [2024-11-21 05:16:36.315463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.319465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.319694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:20.280 [2024-11-21 05:16:36.319718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.952 ms 00:30:20.280 [2024-11-21 05:16:36.319738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.319864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.319877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:20.280 [2024-11-21 05:16:36.319887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:30:20.280 [2024-11-21 05:16:36.319896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.322721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.322897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:20.280 [2024-11-21 05:16:36.322916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.796 ms 00:30:20.280 [2024-11-21 05:16:36.322924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.325630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.325674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:20.280 [2024-11-21 05:16:36.325685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.629 ms 00:30:20.280 [2024-11-21 05:16:36.325693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.328007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.328059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:20.280 [2024-11-21 05:16:36.328069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.267 ms 00:30:20.280 [2024-11-21 05:16:36.328076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.330281] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.280 [2024-11-21 05:16:36.330330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:20.280 [2024-11-21 05:16:36.330341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.122 ms 00:30:20.280 [2024-11-21 05:16:36.330347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.280 [2024-11-21 05:16:36.330392] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:20.280 [2024-11-21 05:16:36.330409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:20.280 [2024-11-21 05:16:36.330431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:20.280 [2024-11-21 05:16:36.330440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:20.280 [2024-11-21 05:16:36.330450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:20.280 [2024-11-21 05:16:36.330578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:20.280 [2024-11-21 05:16:36.330587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8e7eeef6-f3e3-4bfc-9e6d-1a099f7fd873 00:30:20.280 [2024-11-21 05:16:36.330595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:20.280 [2024-11-21 05:16:36.330603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:20.280 [2024-11-21 05:16:36.330635] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:20.280 [2024-11-21 05:16:36.330649] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:20.280 [2024-11-21 05:16:36.330658] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:20.280 [2024-11-21 05:16:36.330667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:20.281 [2024-11-21 05:16:36.330675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:20.281 [2024-11-21 05:16:36.330684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:20.281 [2024-11-21 05:16:36.330691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:20.281 [2024-11-21 05:16:36.330701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.281 [2024-11-21 05:16:36.330710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:20.281 [2024-11-21 05:16:36.330720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:30:20.281 [2024-11-21 05:16:36.330741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.334033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.281 [2024-11-21 05:16:36.334089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:20.281 [2024-11-21 05:16:36.334102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.268 ms 00:30:20.281 [2024-11-21 05:16:36.334111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.334270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.281 [2024-11-21 05:16:36.334280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:20.281 [2024-11-21 05:16:36.334290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.135 ms 00:30:20.281 [2024-11-21 05:16:36.334298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.345662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.345716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:20.281 [2024-11-21 05:16:36.345729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.345738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.345781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.345792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:20.281 [2024-11-21 05:16:36.345802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.345812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.345907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.345923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:20.281 [2024-11-21 05:16:36.345934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.345944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.345964] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.345973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:20.281 [2024-11-21 05:16:36.345986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.345998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.367277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.367351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:20.281 [2024-11-21 05:16:36.367364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.367374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.383830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:20.281 [2024-11-21 05:16:36.384107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.384223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:20.281 [2024-11-21 05:16:36.384257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.384347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:20.281 [2024-11-21 05:16:36.384369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.384471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:20.281 [2024-11-21 05:16:36.384492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.384549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:20.281 [2024-11-21 05:16:36.384572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.384665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:20.281 [2024-11-21 05:16:36.384688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 
[2024-11-21 05:16:36.384766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:20.281 [2024-11-21 05:16:36.384779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:20.281 [2024-11-21 05:16:36.384789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:20.281 [2024-11-21 05:16:36.384800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.281 [2024-11-21 05:16:36.384980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9400.975 ms, result 0 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94432 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94432 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94432 ']' 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.543 05:16:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:20.543 [2024-11-21 05:16:37.181856] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
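A note on the WAF figure in the shutdown statistics a few records back: FTL reports write amplification as the ratio of all media writes (user I/O plus whatever FTL itself wrote in the background) to user writes, and the two counters printed alongside it check out, assuming both are in the same block units:

    WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006

which is exactly the value logged.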
00:30:20.543 [2024-11-21 05:16:37.182154] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94432 ] 00:30:20.804 [2024-11-21 05:16:37.339880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.804 [2024-11-21 05:16:37.372418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.066 [2024-11-21 05:16:37.797750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:21.066 [2024-11-21 05:16:37.798125] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:21.328 [2024-11-21 05:16:37.946288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.328 [2024-11-21 05:16:37.946338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:21.328 [2024-11-21 05:16:37.946355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:21.328 [2024-11-21 05:16:37.946364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.328 [2024-11-21 05:16:37.946420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.328 [2024-11-21 05:16:37.946430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:21.328 [2024-11-21 05:16:37.946440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:21.328 [2024-11-21 05:16:37.946451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.328 [2024-11-21 05:16:37.946476] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:21.328 [2024-11-21 05:16:37.946738] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:21.328 [2024-11-21 05:16:37.946777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.328 [2024-11-21 05:16:37.946786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:21.329 [2024-11-21 05:16:37.946794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:30:21.329 [2024-11-21 05:16:37.946802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.948196] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:21.329 [2024-11-21 05:16:37.951491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.951530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:21.329 [2024-11-21 05:16:37.951547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.297 ms 00:30:21.329 [2024-11-21 05:16:37.951554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.951629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.951640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:21.329 [2024-11-21 05:16:37.951649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:21.329 [2024-11-21 05:16:37.951657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.958622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 
05:16:37.958650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:21.329 [2024-11-21 05:16:37.958659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.920 ms 00:30:21.329 [2024-11-21 05:16:37.958667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.958716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.958725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:21.329 [2024-11-21 05:16:37.958733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:21.329 [2024-11-21 05:16:37.958740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.958780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.958792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:21.329 [2024-11-21 05:16:37.958801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:21.329 [2024-11-21 05:16:37.958809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.958833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:21.329 [2024-11-21 05:16:37.960598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.960636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:21.329 [2024-11-21 05:16:37.960645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.772 ms 00:30:21.329 [2024-11-21 05:16:37.960653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.960683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.960696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:21.329 [2024-11-21 05:16:37.960704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:21.329 [2024-11-21 05:16:37.960715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.960751] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:21.329 [2024-11-21 05:16:37.960772] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:21.329 [2024-11-21 05:16:37.960810] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:21.329 [2024-11-21 05:16:37.960828] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:21.329 [2024-11-21 05:16:37.960943] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:21.329 [2024-11-21 05:16:37.960954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:21.329 [2024-11-21 05:16:37.960966] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:21.329 [2024-11-21 05:16:37.960977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:21.329 [2024-11-21 05:16:37.960986] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:21.329 [2024-11-21 05:16:37.960994] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:21.329 [2024-11-21 05:16:37.961006] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:21.329 [2024-11-21 05:16:37.961013] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:21.329 [2024-11-21 05:16:37.961021] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:21.329 [2024-11-21 05:16:37.961029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.961036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:21.329 [2024-11-21 05:16:37.961046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.281 ms 00:30:21.329 [2024-11-21 05:16:37.961054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.961138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.329 [2024-11-21 05:16:37.961146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:21.329 [2024-11-21 05:16:37.961154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:21.329 [2024-11-21 05:16:37.961163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.329 [2024-11-21 05:16:37.961273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:21.329 [2024-11-21 05:16:37.961288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:21.329 [2024-11-21 05:16:37.961298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:21.329 [2024-11-21 05:16:37.961308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:21.329 [2024-11-21 05:16:37.961326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:21.329 [2024-11-21 05:16:37.961342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:21.329 [2024-11-21 05:16:37.961350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:21.329 [2024-11-21 05:16:37.961358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:21.329 [2024-11-21 05:16:37.961373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:21.329 [2024-11-21 05:16:37.961381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:21.329 [2024-11-21 05:16:37.961396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:21.329 [2024-11-21 05:16:37.961415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:21.329 [2024-11-21 05:16:37.961430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:21.329 [2024-11-21 05:16:37.961437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961445] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:21.329 [2024-11-21 05:16:37.961453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:21.329 [2024-11-21 05:16:37.961461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:21.329 [2024-11-21 05:16:37.961471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:21.329 [2024-11-21 05:16:37.961479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:21.329 [2024-11-21 05:16:37.961487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:21.329 [2024-11-21 05:16:37.961495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:21.329 [2024-11-21 05:16:37.961502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:21.329 [2024-11-21 05:16:37.961510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:21.329 [2024-11-21 05:16:37.961517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:21.329 [2024-11-21 05:16:37.961525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:21.329 [2024-11-21 05:16:37.961532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:21.329 [2024-11-21 05:16:37.961542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:21.329 [2024-11-21 05:16:37.961550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:21.329 [2024-11-21 05:16:37.961558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:21.329 [2024-11-21 05:16:37.961573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:21.329 [2024-11-21 05:16:37.961581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.329 [2024-11-21 05:16:37.961588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:21.329 [2024-11-21 05:16:37.961596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:21.330 [2024-11-21 05:16:37.961603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.330 [2024-11-21 05:16:37.961630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:21.330 [2024-11-21 05:16:37.961637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:21.330 [2024-11-21 05:16:37.961644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.330 [2024-11-21 05:16:37.961650] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:21.330 [2024-11-21 05:16:37.961658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:21.330 [2024-11-21 05:16:37.961666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:21.330 [2024-11-21 05:16:37.961673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:21.330 [2024-11-21 05:16:37.961683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:21.330 [2024-11-21 05:16:37.961691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:21.330 [2024-11-21 05:16:37.961697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:21.330 [2024-11-21 05:16:37.961704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:21.330 [2024-11-21 05:16:37.961710] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:21.330 [2024-11-21 05:16:37.961717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:21.330 [2024-11-21 05:16:37.961726] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:21.330 [2024-11-21 05:16:37.961756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:21.330 [2024-11-21 05:16:37.961773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:21.330 [2024-11-21 05:16:37.961796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:21.330 [2024-11-21 05:16:37.961803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:21.330 [2024-11-21 05:16:37.961810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:21.330 [2024-11-21 05:16:37.961818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:21.330 [2024-11-21 05:16:37.961871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:21.330 [2024-11-21 05:16:37.961879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:21.330 [2024-11-21 05:16:37.961896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:21.330 [2024-11-21 05:16:37.961903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:21.330 [2024-11-21 05:16:37.961911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:21.330 [2024-11-21 05:16:37.961918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.330 [2024-11-21 05:16:37.961928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:21.330 [2024-11-21 05:16:37.961942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 00:30:21.330 [2024-11-21 05:16:37.961949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.330 [2024-11-21 05:16:37.961995] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:21.330 [2024-11-21 05:16:37.962005] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:25.540 [2024-11-21 05:16:42.207192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.540 [2024-11-21 05:16:42.207283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:25.540 [2024-11-21 05:16:42.207301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4245.181 ms 00:30:25.540 [2024-11-21 05:16:42.207311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.540 [2024-11-21 05:16:42.221364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.540 [2024-11-21 05:16:42.221649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:25.540 [2024-11-21 05:16:42.221673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.919 ms 00:30:25.540 [2024-11-21 05:16:42.221683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.221758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.221770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:25.541 [2024-11-21 05:16:42.221791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:25.541 [2024-11-21 05:16:42.221800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.234267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.234321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:25.541 [2024-11-21 05:16:42.234334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.412 ms 00:30:25.541 [2024-11-21 05:16:42.234343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.234386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.234396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:25.541 [2024-11-21 05:16:42.234405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:25.541 [2024-11-21 05:16:42.234417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.235001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.235028] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:25.541 [2024-11-21 05:16:42.235050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:30:25.541 [2024-11-21 05:16:42.235060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.235120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.235142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:25.541 [2024-11-21 05:16:42.235153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:25.541 [2024-11-21 05:16:42.235163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.243796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.243841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:25.541 [2024-11-21 05:16:42.243853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.602 ms 00:30:25.541 [2024-11-21 05:16:42.243862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.247787] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:25.541 [2024-11-21 05:16:42.247841] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:25.541 [2024-11-21 05:16:42.247856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.247864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:25.541 [2024-11-21 05:16:42.247874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.871 ms 00:30:25.541 [2024-11-21 05:16:42.247883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.253066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.253333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:25.541 [2024-11-21 05:16:42.253354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.127 ms 00:30:25.541 [2024-11-21 05:16:42.253364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.256441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.256651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:25.541 [2024-11-21 05:16:42.256670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.989 ms 00:30:25.541 [2024-11-21 05:16:42.256678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.259427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.259478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:25.541 [2024-11-21 05:16:42.259488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.653 ms 00:30:25.541 [2024-11-21 05:16:42.259497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.541 [2024-11-21 05:16:42.259862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.541 [2024-11-21 05:16:42.259877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:25.541 [2024-11-21 
05:16:42.259886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:30:25.541 [2024-11-21 05:16:42.259894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.294969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.295038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:25.804 [2024-11-21 05:16:42.295053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.053 ms 00:30:25.804 [2024-11-21 05:16:42.295063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.303296] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:25.804 [2024-11-21 05:16:42.304157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.304204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:25.804 [2024-11-21 05:16:42.304217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.036 ms 00:30:25.804 [2024-11-21 05:16:42.304226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.304307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.304318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:25.804 [2024-11-21 05:16:42.304328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:25.804 [2024-11-21 05:16:42.304336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.304385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.304397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:25.804 [2024-11-21 05:16:42.304410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:25.804 [2024-11-21 05:16:42.304419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.304443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.304452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:25.804 [2024-11-21 05:16:42.304461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:25.804 [2024-11-21 05:16:42.304469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.304509] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:25.804 [2024-11-21 05:16:42.304529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.304537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:25.804 [2024-11-21 05:16:42.304546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:25.804 [2024-11-21 05:16:42.304557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.309963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.310012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:25.804 [2024-11-21 05:16:42.310031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.384 ms 00:30:25.804 [2024-11-21 05:16:42.310040] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.310131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.804 [2024-11-21 05:16:42.310141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:25.804 [2024-11-21 05:16:42.310150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:30:25.804 [2024-11-21 05:16:42.310158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.804 [2024-11-21 05:16:42.311273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4364.520 ms, result 0 00:30:25.804 [2024-11-21 05:16:42.324594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.804 [2024-11-21 05:16:42.340597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:25.804 [2024-11-21 05:16:42.348746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:25.804 05:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.804 05:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:25.804 05:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:25.804 05:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:25.804 05:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:26.065 [2024-11-21 05:16:42.600794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.065 [2024-11-21 05:16:42.600849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.065 [2024-11-21 05:16:42.600862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:26.066 [2024-11-21 05:16:42.600871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.066 [2024-11-21 05:16:42.600896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.066 [2024-11-21 05:16:42.600906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.066 [2024-11-21 05:16:42.600915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.066 [2024-11-21 05:16:42.600926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.066 [2024-11-21 05:16:42.600947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.066 [2024-11-21 05:16:42.600956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.066 [2024-11-21 05:16:42.600965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.066 [2024-11-21 05:16:42.600973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.066 [2024-11-21 05:16:42.601033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.244 ms, result 0 00:30:26.066 true 00:30:26.066 05:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.327 { 00:30:26.327 "name": "ftl", 00:30:26.328 "properties": [ 00:30:26.328 { 00:30:26.328 "name": "superblock_version", 00:30:26.328 "value": 5, 00:30:26.328 "read-only": true 00:30:26.328 }, 00:30:26.328 { 
00:30:26.328 "name": "base_device", 00:30:26.328 "bands": [ 00:30:26.328 { 00:30:26.328 "id": 0, 00:30:26.328 "state": "CLOSED", 00:30:26.328 "validity": 1.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 1, 00:30:26.328 "state": "CLOSED", 00:30:26.328 "validity": 1.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 2, 00:30:26.328 "state": "CLOSED", 00:30:26.328 "validity": 0.007843137254901933 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 3, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 4, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 5, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 6, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 7, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 8, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 9, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 10, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 11, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 12, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 13, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 14, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 15, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 16, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 17, 00:30:26.328 "state": "FREE", 00:30:26.328 "validity": 0.0 00:30:26.328 } 00:30:26.328 ], 00:30:26.328 "read-only": true 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "name": "cache_device", 00:30:26.328 "type": "bdev", 00:30:26.328 "chunks": [ 00:30:26.328 { 00:30:26.328 "id": 0, 00:30:26.328 "state": "INACTIVE", 00:30:26.328 "utilization": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 1, 00:30:26.328 "state": "OPEN", 00:30:26.328 "utilization": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 2, 00:30:26.328 "state": "OPEN", 00:30:26.328 "utilization": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 3, 00:30:26.328 "state": "FREE", 00:30:26.328 "utilization": 0.0 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "id": 4, 00:30:26.328 "state": "FREE", 00:30:26.328 "utilization": 0.0 00:30:26.328 } 00:30:26.328 ], 00:30:26.328 "read-only": true 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "name": "verbose_mode", 00:30:26.328 "value": true, 00:30:26.328 "unit": "", 00:30:26.328 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "name": "prep_upgrade_on_shutdown", 00:30:26.328 "value": false, 00:30:26.328 "unit": "", 00:30:26.328 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:26.328 } 00:30:26.328 ] 00:30:26.328 } 00:30:26.328 05:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:26.328 05:16:42 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.328 05:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.591 Validate MD5 checksum, iteration 1 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:26.591 05:16:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:26.852 [2024-11-21 05:16:43.369744] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
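The two jq probes traced above (upgrade_shutdown.sh lines 82 and 89) are the pre-shutdown quiesce check: no NV-cache chunk may still hold data, and no band may be caught in the OPENED state, before the target is taken down. A standalone sketch of the same checks, assuming only the rpc.py path shown in the log and a running target:

    #!/usr/bin/env bash
    # Quiesce check against a live FTL bdev, mirroring the filters traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)

    # NV-cache chunks whose utilization is nonzero (still holding data)
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

    # bands reported in the OPENED state (writes in flight)
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

    (( used == 0 && opened == 0 )) || { echo "FTL not quiesced"; exit 1; }

Worth noting: in the property dump above, the bands actually live under the property named "base_device", so the second filter's select(.name == "bands") matches nothing against that shape of output and always yields 0; the check still passes here because there are indeed no OPENED bands.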
00:30:26.852 [2024-11-21 05:16:43.370158] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94512 ] 00:30:26.852 [2024-11-21 05:16:43.532976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.852 [2024-11-21 05:16:43.573909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.770  [2024-11-21T05:16:46.078Z] Copying: 615/1024 [MB] (615 MBps) [2024-11-21T05:16:47.023Z] Copying: 1024/1024 [MB] (average 543 MBps) 00:30:30.289 00:30:30.289 05:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:30.289 05:16:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:32.838 Validate MD5 checksum, iteration 2 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5ce934df98440cbdd2652de6d4141de7 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5ce934df98440cbdd2652de6d4141de7 != \5\c\e\9\3\4\d\f\9\8\4\4\0\c\b\d\d\2\6\5\2\d\e\6\d\4\1\4\1\d\e\7 ]] 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:32.838 05:16:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:32.838 [2024-11-21 05:16:49.077874] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
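One artifact of the trace above that is easy to misread as corruption: the expected digest shows up as \5\c\e\9\3\4\d\f... with every character escaped. That is ordinary bash xtrace output, not garbling. Inside [[ ... != ... ]] the right-hand side is a glob pattern, and when it expands from a quoted variable, bash prints each character backslash-escaped to signal that it is matched literally. A minimal reproduction:

    set -x
    sum=5ce934df98440cbdd2652de6d4141de7
    [[ $sum != "$sum" ]]
    # xtrace prints: + [[ 5ce934df98440cbdd2652de6d4141de7 != \5\c\e\9\3\4\d\f... ]]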
00:30:32.838 [2024-11-21 05:16:49.078102] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94575 ] 00:30:32.839 [2024-11-21 05:16:49.236255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.839 [2024-11-21 05:16:49.260376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.229  [2024-11-21T05:16:51.533Z] Copying: 596/1024 [MB] (596 MBps) [2024-11-21T05:16:52.103Z] Copying: 1024/1024 [MB] (average 564 MBps) 00:30:35.369 00:30:35.369 05:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:35.369 05:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=16e57c257b7de72df0eea28cef15de92 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 16e57c257b7de72df0eea28cef15de92 != \1\6\e\5\7\c\2\5\7\b\7\d\e\7\2\d\f\0\e\e\a\2\8\c\e\f\1\5\d\e\9\2 ]] 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94432 ]] 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94432 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94636 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94636 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94636 ']' 00:30:37.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
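The two passes above are the whole of test_validate_checksum: with bs=1048576 and count=1024, each pass reads 1 GiB off ftln1 over NVMe/TCP at queue depth 2, advances skip by 1024 MiB, and requires the MD5 of the freshly read gigabyte to equal the digest recorded for the same region earlier in the test. A condensed sketch of the loop, assuming the tcp_dd helper from ftl/common.sh is available and the recorded digests sit in an expected_md5 array:

    # Read the FTL bdev back in 1 GiB passes and compare each pass's MD5
    # against the digest captured for that region earlier in the test.
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for expected in "${expected_md5[@]}"; do
      tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sum=$(md5sum "$file" | cut -f1 -d' ')
      [[ $sum == "$expected" ]] || { echo "MD5 mismatch at skip=$skip"; exit 1; }
      skip=$((skip + 1024))
    done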
00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:37.282 05:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:37.542 [2024-11-21 05:16:54.050602] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:30:37.542 [2024-11-21 05:16:54.050706] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94636 ] 00:30:37.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94432 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:37.542 [2024-11-21 05:16:54.197363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.542 [2024-11-21 05:16:54.220466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.801 [2024-11-21 05:16:54.511228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:37.801 [2024-11-21 05:16:54.511436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:38.063 [2024-11-21 05:16:54.649469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.649504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:38.063 [2024-11-21 05:16:54.649518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:38.063 [2024-11-21 05:16:54.649526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.649569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.649577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:38.063 [2024-11-21 05:16:54.649588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:38.063 [2024-11-21 05:16:54.649596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.649627] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:38.063 [2024-11-21 05:16:54.649818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:38.063 [2024-11-21 05:16:54.649831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.649838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:38.063 [2024-11-21 05:16:54.649844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:30:38.063 [2024-11-21 05:16:54.649849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.650088] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:38.063 [2024-11-21 05:16:54.654138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.654171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:38.063 [2024-11-21 05:16:54.654184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.053 ms 
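The "Killed" line above is the point of the ftl_upgrade_shutdown scenario: the first target (pid 94432) is removed with SIGKILL instead of a clean shutdown, and a second target (pid 94636) is started against the same tgt.json, so FTL must bring the device back up from whatever was made persistent. In shell terms the step looks roughly like this (backgrounding and waitforlisten as used by the common.sh helpers traced above):

    # Dirty shutdown, then a fresh target on the same configuration.
    kill -9 "$spdk_tgt_pid"              # no FTL shutdown path runs
    unset spdk_tgt_pid

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"        # returns once /var/tmp/spdk.sock accepts RPCs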
00:30:38.063 [2024-11-21 05:16:54.654191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.655098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.655123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:38.063 [2024-11-21 05:16:54.655131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:38.063 [2024-11-21 05:16:54.655137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.655354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.655363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:38.063 [2024-11-21 05:16:54.655370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:30:38.063 [2024-11-21 05:16:54.655376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.655406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.655413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:38.063 [2024-11-21 05:16:54.655419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:38.063 [2024-11-21 05:16:54.655425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.655445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.655451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:38.063 [2024-11-21 05:16:54.655459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:38.063 [2024-11-21 05:16:54.655468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.655487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:38.063 [2024-11-21 05:16:54.656249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.656262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:38.063 [2024-11-21 05:16:54.656269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.769 ms 00:30:38.063 [2024-11-21 05:16:54.656275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.656294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.656303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:38.063 [2024-11-21 05:16:54.656309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:38.063 [2024-11-21 05:16:54.656315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.656331] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:38.063 [2024-11-21 05:16:54.656347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:38.063 [2024-11-21 05:16:54.656379] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:38.063 [2024-11-21 05:16:54.656391] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:38.063 [2024-11-21 
05:16:54.656475] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:38.063 [2024-11-21 05:16:54.656484] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:38.063 [2024-11-21 05:16:54.656493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:38.063 [2024-11-21 05:16:54.656501] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:38.063 [2024-11-21 05:16:54.656511] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:38.063 [2024-11-21 05:16:54.656517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:38.063 [2024-11-21 05:16:54.656522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:38.063 [2024-11-21 05:16:54.656528] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:38.063 [2024-11-21 05:16:54.656537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:38.063 [2024-11-21 05:16:54.656542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.656554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:38.063 [2024-11-21 05:16:54.656562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:30:38.063 [2024-11-21 05:16:54.656567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.656643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.063 [2024-11-21 05:16:54.656650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:38.063 [2024-11-21 05:16:54.656659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:30:38.063 [2024-11-21 05:16:54.656666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.063 [2024-11-21 05:16:54.656750] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:38.063 [2024-11-21 05:16:54.656758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:38.064 [2024-11-21 05:16:54.656765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:38.064 [2024-11-21 05:16:54.656773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:38.064 [2024-11-21 05:16:54.656785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:38.064 [2024-11-21 05:16:54.656796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:38.064 [2024-11-21 05:16:54.656801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:38.064 [2024-11-21 05:16:54.656806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:38.064 [2024-11-21 05:16:54.656817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:38.064 [2024-11-21 05:16:54.656824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 
05:16:54.656834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:38.064 [2024-11-21 05:16:54.656844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:38.064 [2024-11-21 05:16:54.656849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:38.064 [2024-11-21 05:16:54.656859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:38.064 [2024-11-21 05:16:54.656865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:38.064 [2024-11-21 05:16:54.656875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:38.064 [2024-11-21 05:16:54.656880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.064 [2024-11-21 05:16:54.656885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:38.064 [2024-11-21 05:16:54.656891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:38.064 [2024-11-21 05:16:54.656896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.064 [2024-11-21 05:16:54.656902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:38.064 [2024-11-21 05:16:54.656908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:38.064 [2024-11-21 05:16:54.656914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.064 [2024-11-21 05:16:54.656921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:38.064 [2024-11-21 05:16:54.656927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:38.064 [2024-11-21 05:16:54.656935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.064 [2024-11-21 05:16:54.656941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:38.064 [2024-11-21 05:16:54.656947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:38.064 [2024-11-21 05:16:54.656953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:38.064 [2024-11-21 05:16:54.656965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:38.064 [2024-11-21 05:16:54.656970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:38.064 [2024-11-21 05:16:54.656983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.656994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:38.064 [2024-11-21 05:16:54.657000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:38.064 [2024-11-21 05:16:54.657005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.657011] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:38.064 [2024-11-21 05:16:54.657019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:38.064 
[2024-11-21 05:16:54.657025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:38.064 [2024-11-21 05:16:54.657034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.064 [2024-11-21 05:16:54.657040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:38.064 [2024-11-21 05:16:54.657047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:38.064 [2024-11-21 05:16:54.657052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:38.064 [2024-11-21 05:16:54.657058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:38.064 [2024-11-21 05:16:54.657064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:38.064 [2024-11-21 05:16:54.657070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:38.064 [2024-11-21 05:16:54.657078] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:38.064 [2024-11-21 05:16:54.657088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:38.064 [2024-11-21 05:16:54.657101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:38.064 [2024-11-21 05:16:54.657120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:38.064 [2024-11-21 05:16:54.657126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:38.064 [2024-11-21 05:16:54.657133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:38.064 [2024-11-21 05:16:54.657140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:38.064 [2024-11-21 05:16:54.657185] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:38.064 [2024-11-21 05:16:54.657195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:38.064 [2024-11-21 05:16:54.657208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:38.064 [2024-11-21 05:16:54.657214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:38.064 [2024-11-21 05:16:54.657237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:38.064 [2024-11-21 05:16:54.657244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.657258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:38.064 [2024-11-21 05:16:54.657266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:30:38.064 [2024-11-21 05:16:54.657277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.665749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.665775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:38.064 [2024-11-21 05:16:54.665785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.433 ms 00:30:38.064 [2024-11-21 05:16:54.665791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.665823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.665829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:38.064 [2024-11-21 05:16:54.665836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:38.064 [2024-11-21 05:16:54.665843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.675674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.675699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:38.064 [2024-11-21 05:16:54.675707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.796 ms 00:30:38.064 [2024-11-21 05:16:54.675714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.675735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.675741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:38.064 [2024-11-21 05:16:54.675750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:38.064 [2024-11-21 05:16:54.675756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.675830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.675839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
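A quick cross-check of the layout dump above, assuming a 4 KiB FTL block (implied by the blocks-to-MiB figures) and reading the type 0x2 region in the SB metadata dump as the L2P table (an inference from the size match, not something the log states):

  data_btm region: 18432.00 MiB at 4 KiB/block = 18432 * 256 = 4,718,592 blocks
  L2P entries:     4,718,592 * 0.8 = 3,774,873.6 -> 3,774,873 as dumped, i.e. about 20%
                   of the base device is held back, presumably for overprovisioning
  L2P table size:  3,774,873 entries * 4 B (the dumped address size) = 15,099,492 B ~ 14.40 MiB
  l2p region:      blk_offs 0x20, blk_sz 0xe80 = 3712 blocks * 4 KiB = 14.50 MiB, matching
                   'Region l2p ... blocks: 14.50 MiB'; the ~0.1 MiB of slack is presumably alignment

So the '14.50 MiB' l2p region in the NV cache layout, the 3,774,873 L2P entries, and the 0xe80-block type 0x2 region in the SB metadata dump are three views of the same table.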
00:30:38.064 [2024-11-21 05:16:54.675846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:38.064 [2024-11-21 05:16:54.675854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.675887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.064 [2024-11-21 05:16:54.675893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:38.064 [2024-11-21 05:16:54.675899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:38.064 [2024-11-21 05:16:54.675907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.064 [2024-11-21 05:16:54.682352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.682381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:38.065 [2024-11-21 05:16:54.682389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.418 ms 00:30:38.065 [2024-11-21 05:16:54.682395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.682460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.682468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:38.065 [2024-11-21 05:16:54.682475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:38.065 [2024-11-21 05:16:54.682483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.701265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.701326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:38.065 [2024-11-21 05:16:54.701349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.766 ms 00:30:38.065 [2024-11-21 05:16:54.701361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.703042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.703080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:38.065 [2024-11-21 05:16:54.703094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:30:38.065 [2024-11-21 05:16:54.703109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.719871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.719909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:38.065 [2024-11-21 05:16:54.719920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.727 ms 00:30:38.065 [2024-11-21 05:16:54.719930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.720042] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:38.065 [2024-11-21 05:16:54.720130] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:38.065 [2024-11-21 05:16:54.720215] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:38.065 [2024-11-21 05:16:54.720296] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:38.065 [2024-11-21 05:16:54.720303] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.720310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:38.065 [2024-11-21 05:16:54.720317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:30:38.065 [2024-11-21 05:16:54.720326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.720359] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:38.065 [2024-11-21 05:16:54.720369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.720378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:38.065 [2024-11-21 05:16:54.720387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:38.065 [2024-11-21 05:16:54.720396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.722794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.722823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:38.065 [2024-11-21 05:16:54.722832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.381 ms 00:30:38.065 [2024-11-21 05:16:54.722842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.723348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.723373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:38.065 [2024-11-21 05:16:54.723381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:38.065 [2024-11-21 05:16:54.723388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.065 [2024-11-21 05:16:54.723445] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:38.065 [2024-11-21 05:16:54.723626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.065 [2024-11-21 05:16:54.723637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:38.065 [2024-11-21 05:16:54.723644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:30:38.065 [2024-11-21 05:16:54.723653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.012 [2024-11-21 05:16:55.698168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.012 [2024-11-21 05:16:55.698564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:39.012 [2024-11-21 05:16:55.698597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 974.259 ms 00:30:39.012 [2024-11-21 05:16:55.698637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.012 [2024-11-21 05:16:55.700971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.012 [2024-11-21 05:16:55.701024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:39.012 [2024-11-21 05:16:55.701055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.729 ms 00:30:39.012 [2024-11-21 05:16:55.701064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.012 [2024-11-21 05:16:55.702119] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:30:39.012 [2024-11-21 05:16:55.702168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.012 [2024-11-21 05:16:55.702180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:39.012 [2024-11-21 05:16:55.702191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:30:39.012 [2024-11-21 05:16:55.702201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.012 [2024-11-21 05:16:55.702256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.012 [2024-11-21 05:16:55.702272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:39.012 [2024-11-21 05:16:55.702282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:39.012 [2024-11-21 05:16:55.702291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.012 [2024-11-21 05:16:55.702331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 978.878 ms, result 0 00:30:39.012 [2024-11-21 05:16:55.702390] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:39.012 [2024-11-21 05:16:55.702490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.012 [2024-11-21 05:16:55.702502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:39.012 [2024-11-21 05:16:55.702512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.101 ms 00:30:39.012 [2024-11-21 05:16:55.702521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.425642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.425703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:39.954 [2024-11-21 05:16:56.425718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 722.423 ms 00:30:39.954 [2024-11-21 05:16:56.425726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.427303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.427339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:39.954 [2024-11-21 05:16:56.427350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:30:39.954 [2024-11-21 05:16:56.427358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.427866] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:39.954 [2024-11-21 05:16:56.427892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.427901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:39.954 [2024-11-21 05:16:56.427911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.509 ms 00:30:39.954 [2024-11-21 05:16:56.427918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.428184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.428220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:39.954 [2024-11-21 05:16:56.428231] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:39.954 [2024-11-21 05:16:56.428239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.428286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 725.897 ms, result 0 00:30:39.954 [2024-11-21 05:16:56.428336] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:39.954 [2024-11-21 05:16:56.428348] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:39.954 [2024-11-21 05:16:56.428358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.428373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:39.954 [2024-11-21 05:16:56.428382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1704.923 ms 00:30:39.954 [2024-11-21 05:16:56.428394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.428424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.428434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:39.954 [2024-11-21 05:16:56.428442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:39.954 [2024-11-21 05:16:56.428451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.437172] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:39.954 [2024-11-21 05:16:56.437296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.437307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:39.954 [2024-11-21 05:16:56.437321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.825 ms 00:30:39.954 [2024-11-21 05:16:56.437333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.438046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.438067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:39.954 [2024-11-21 05:16:56.438077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:30:39.954 [2024-11-21 05:16:56.438085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.954 [2024-11-21 05:16:56.440311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.954 [2024-11-21 05:16:56.440446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:39.954 [2024-11-21 05:16:56.440462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.209 ms 00:30:39.954 [2024-11-21 05:16:56.440470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.955 [2024-11-21 05:16:56.440513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.955 [2024-11-21 05:16:56.440524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:39.955 [2024-11-21 05:16:56.440531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:39.955 [2024-11-21 05:16:56.440539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.955 [2024-11-21 05:16:56.440662] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.955 [2024-11-21 05:16:56.440677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:39.955 [2024-11-21 05:16:56.440689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:39.955 [2024-11-21 05:16:56.440697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.955 [2024-11-21 05:16:56.440718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.955 [2024-11-21 05:16:56.440726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:39.955 [2024-11-21 05:16:56.440735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:39.955 [2024-11-21 05:16:56.440741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.955 [2024-11-21 05:16:56.440777] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:39.955 [2024-11-21 05:16:56.440787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.955 [2024-11-21 05:16:56.440794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:39.955 [2024-11-21 05:16:56.440802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:39.955 [2024-11-21 05:16:56.440812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.955 [2024-11-21 05:16:56.440863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.955 [2024-11-21 05:16:56.440872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:39.955 [2024-11-21 05:16:56.440880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:39.955 [2024-11-21 05:16:56.440887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.955 [2024-11-21 05:16:56.441866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1791.913 ms, result 0 00:30:39.955 [2024-11-21 05:16:56.457662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.955 [2024-11-21 05:16:56.473660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:39.955 [2024-11-21 05:16:56.481796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:39.955 Validate MD5 checksum, iteration 1 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:39.955 05:16:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:40.216 [2024-11-21 05:16:56.691932] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:30:40.216 [2024-11-21 05:16:56.692726] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94666 ] 00:30:40.217 [2024-11-21 05:16:56.863169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.217 [2024-11-21 05:16:56.888437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.661  [2024-11-21T05:16:59.332Z] Copying: 489/1024 [MB] (489 MBps) [2024-11-21T05:16:59.902Z] Copying: 1024/1024 [MB] (average 555 MBps) 00:30:43.168 00:30:43.168 05:16:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:43.168 05:16:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5ce934df98440cbdd2652de6d4141de7 00:30:45.082 Validate MD5 checksum, iteration 2 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5ce934df98440cbdd2652de6d4141de7 != \5\c\e\9\3\4\d\f\9\8\4\4\0\c\b\d\d\2\6\5\2\d\e\6\d\4\1\4\1\d\e\7 ]] 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:45.082 05:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:45.341 [2024-11-21 05:17:01.842667] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:30:45.341 [2024-11-21 05:17:01.842955] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94722 ] 00:30:45.341 [2024-11-21 05:17:02.002444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.341 [2024-11-21 05:17:02.027013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.724  [2024-11-21T05:17:04.400Z] Copying: 530/1024 [MB] (530 MBps) [2024-11-21T05:17:04.659Z] Copying: 1024/1024 [MB] (average 573 MBps) 00:30:47.925 00:30:47.925 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:47.925 05:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=16e57c257b7de72df0eea28cef15de92 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 16e57c257b7de72df0eea28cef15de92 != \1\6\e\5\7\c\2\5\7\b\7\d\e\7\2\d\f\0\e\e\a\2\8\c\e\f\1\5\d\e\9\2 ]] 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94636 ]] 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94636 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94636 ']' 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94636 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94636 00:30:50.462 killing process with pid 94636 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94636' 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94636 00:30:50.462 05:17:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94636 00:30:50.462 [2024-11-21 05:17:06.986094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:50.462 [2024-11-21 05:17:06.990930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.990972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:50.462 [2024-11-21 05:17:06.990984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:50.462 [2024-11-21 05:17:06.990992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.991010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:50.462 [2024-11-21 05:17:06.991526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.991550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:50.462 [2024-11-21 05:17:06.991562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:30:50.462 [2024-11-21 05:17:06.991568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.991771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.991780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:50.462 [2024-11-21 05:17:06.991791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:30:50.462 [2024-11-21 05:17:06.991798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.992916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.992938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:50.462 [2024-11-21 05:17:06.992946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:30:50.462 [2024-11-21 05:17:06.992952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.993844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.993953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:50.462 [2024-11-21 05:17:06.993966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.863 ms 00:30:50.462 [2024-11-21 05:17:06.993973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.995392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.995416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:50.462 [2024-11-21 05:17:06.995431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.377 ms 00:30:50.462 [2024-11-21 05:17:06.995440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.996805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.996832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:50.462 [2024-11-21 05:17:06.996840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.338 ms 00:30:50.462 [2024-11-21 05:17:06.996846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.996909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.996917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:50.462 [2024-11-21 05:17:06.996924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:50.462 [2024-11-21 05:17:06.996930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.998369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.998396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:50.462 [2024-11-21 05:17:06.998403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.419 ms 00:30:50.462 [2024-11-21 05:17:06.998409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:06.999617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:06.999640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:50.462 [2024-11-21 05:17:06.999647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.171 ms 00:30:50.462 [2024-11-21 05:17:06.999653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:07.000720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:07.000743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:50.462 [2024-11-21 05:17:07.000750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.042 ms 00:30:50.462 [2024-11-21 05:17:07.000756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:07.001854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.462 [2024-11-21 05:17:07.001881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:50.462 [2024-11-21 05:17:07.001888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.053 ms 00:30:50.462 [2024-11-21 05:17:07.001893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.462 [2024-11-21 05:17:07.001918] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:50.462 [2024-11-21 05:17:07.001938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.462 [2024-11-21 05:17:07.001947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:50.462 [2024-11-21 05:17:07.001954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:50.462 [2024-11-21 05:17:07.001961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:50.462 [2024-11-21 05:17:07.001966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:50.462 [2024-11-21 05:17:07.001972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.001979] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.001985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.001991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.001997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:50.463 [2024-11-21 05:17:07.002051] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:50.463 [2024-11-21 05:17:07.002057] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8e7eeef6-f3e3-4bfc-9e6d-1a099f7fd873 00:30:50.463 [2024-11-21 05:17:07.002063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:50.463 [2024-11-21 05:17:07.002069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:50.463 [2024-11-21 05:17:07.002075] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:50.463 [2024-11-21 05:17:07.002080] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:50.463 [2024-11-21 05:17:07.002086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:50.463 [2024-11-21 05:17:07.002092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:50.463 [2024-11-21 05:17:07.002098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:50.463 [2024-11-21 05:17:07.002102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:50.463 [2024-11-21 05:17:07.002107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:50.463 [2024-11-21 05:17:07.002112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.463 [2024-11-21 05:17:07.002121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:50.463 [2024-11-21 05:17:07.002127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:30:50.463 [2024-11-21 05:17:07.002133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.003801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.463 [2024-11-21 05:17:07.003828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:50.463 [2024-11-21 05:17:07.003837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.655 ms 00:30:50.463 [2024-11-21 05:17:07.003843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.003940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.463 [2024-11-21 05:17:07.003947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:50.463 [2024-11-21 05:17:07.003954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:30:50.463 [2024-11-21 05:17:07.003959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.010009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.010036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:50.463 [2024-11-21 05:17:07.010044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.010051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.010080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.010087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:50.463 [2024-11-21 05:17:07.010093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.010103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.010161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.010169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:50.463 [2024-11-21 05:17:07.010176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.010182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.010202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.010210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:50.463 [2024-11-21 05:17:07.010216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.010222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.021136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.021174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:50.463 [2024-11-21 05:17:07.021184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.021191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.029617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.029653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:50.463 [2024-11-21 05:17:07.029662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.029669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.029727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.029736] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:50.463 [2024-11-21 05:17:07.029742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.029749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.029780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.029788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:50.463 [2024-11-21 05:17:07.029797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.029803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.029862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.029871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:50.463 [2024-11-21 05:17:07.029878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.029883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.029910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.029917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:50.463 [2024-11-21 05:17:07.029924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.029932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.029964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.029972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:50.463 [2024-11-21 05:17:07.029978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.029984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.030023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:50.463 [2024-11-21 05:17:07.030036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:50.463 [2024-11-21 05:17:07.030044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:50.463 [2024-11-21 05:17:07.030050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.463 [2024-11-21 05:17:07.030157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 39.198 ms, result 0 00:30:54.663 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:54.663 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:54.663 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:54.663 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:54.663 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:54.663 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:54.664 Remove 
shared memory files 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94432 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:54.664 00:30:54.664 real 1m17.793s 00:30:54.664 user 1m40.944s 00:30:54.664 sys 0m21.569s 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.664 ************************************ 00:30:54.664 END TEST ftl_upgrade_shutdown 00:30:54.664 05:17:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:54.664 ************************************ 00:30:54.664 05:17:10 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:54.664 05:17:10 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:54.664 05:17:10 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:54.664 05:17:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.664 05:17:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:54.664 ************************************ 00:30:54.664 START TEST ftl_restore_fast 00:30:54.664 ************************************ 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:54.664 * Looking for test storage... 00:30:54.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.664 --rc genhtml_branch_coverage=1 00:30:54.664 --rc genhtml_function_coverage=1 00:30:54.664 --rc genhtml_legend=1 00:30:54.664 --rc geninfo_all_blocks=1 00:30:54.664 --rc geninfo_unexecuted_blocks=1 00:30:54.664 00:30:54.664 ' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.664 --rc genhtml_branch_coverage=1 00:30:54.664 --rc genhtml_function_coverage=1 00:30:54.664 --rc genhtml_legend=1 00:30:54.664 --rc geninfo_all_blocks=1 00:30:54.664 --rc geninfo_unexecuted_blocks=1 00:30:54.664 00:30:54.664 ' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.664 --rc genhtml_branch_coverage=1 00:30:54.664 --rc genhtml_function_coverage=1 00:30:54.664 --rc genhtml_legend=1 00:30:54.664 --rc geninfo_all_blocks=1 00:30:54.664 --rc geninfo_unexecuted_blocks=1 00:30:54.664 00:30:54.664 ' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:54.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.664 --rc genhtml_branch_coverage=1 00:30:54.664 --rc genhtml_function_coverage=1 00:30:54.664 --rc genhtml_legend=1 00:30:54.664 --rc geninfo_all_blocks=1 00:30:54.664 --rc geninfo_unexecuted_blocks=1 00:30:54.664 00:30:54.664 ' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
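The xtrace above (scripts/common.sh@333-368) walks through the lcov version gate: 'lt 1.15 2' splits both version strings on the characters '.-:', normalizes each field, and compares field by field to decide which LCOV_OPTS to export. Below is a minimal sketch reconstructing that logic from the traced commands; the function names and the lines that appear in the trace come straight from the log, but the untraced branches (the other comparison operators, decimal()'s non-numeric fallback) are filled in by assumption and may differ from the real scripts/common.sh:

  #!/usr/bin/env bash
  decimal() {
      local d=$1
      # the numeric path matches the trace; the non-numeric fallback is assumed
      [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
  }

  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l
      IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

      local lt=0 gt=0 eq=0 v
      case "$2" in                      # only '<' is exercised in this run;
          '<') : $((lt = 1)) ;;         # xtrace shows it as ': 1' after expansion
          '>') : $((gt = 1)) ;;
          '<=') : $((lt = 1, eq = 1)) ;;
          '>=') : $((gt = 1, eq = 1)) ;;
          '==') : $((eq = 1)) ;;
      esac

      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          ver1[v]=$(decimal "${ver1[v]:-0}")
          ver2[v]=$(decimal "${ver2[v]:-0}")
          ((ver1[v] > ver2[v])) && return $((!gt))   # a strictly greater field decides
          ((ver1[v] < ver2[v])) && return $((!lt))   # a strictly smaller field decides
      done
      return $((!eq))
  }

  lt() { cmp_versions "$1" '<' "$2"; }

  # matches the traced outcome: 1 < 2 in the first field, so 'return 0' fires
  # and the pre-2.0 lcov option set is exported
  lt 1.15 2 && echo "lcov < 2: export the pre-2.0 LCOV_OPTS"

The field-by-field loop is why '1.15 < 2' holds even though 1.15 > 2 as a float: only the first dot-separated field (1 vs 2) is ever compared.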
00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.F2xQjEPJcF 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:54.664 05:17:10 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94867 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94867 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94867 ']' 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.664 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.665 05:17:10 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:54.665 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.665 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.665 05:17:10 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:54.665 [2024-11-21 05:17:10.849060] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:30:54.665 [2024-11-21 05:17:10.849437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94867 ] 00:30:54.665 [2024-11-21 05:17:11.010732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.665 [2024-11-21 05:17:11.033863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- 
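By this point restore.sh has parsed its flags (-f sets fast_shutdown=1, -c 0000:00:10.0 names the NV-cache device, and the remaining 0000:00:11.0 becomes the base device), launched spdk_tgt as pid 94867 under a cleanup trap, and sat in waitforlisten until the RPC socket answered. A rough sketch of that launch-and-poll pattern; the retry count and sleep interval are assumptions, and rpc_get_methods is a standard SPDK RPC used here only as a liveness probe:

# Start the target in the background, then poll its UNIX-domain RPC socket.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" & svcpid=$!
trap 'kill "$svcpid" 2> /dev/null; exit 1' SIGINT SIGTERM EXIT
for ((i = 0; i < 240; i++)); do
	# rpc.py exits non-zero until the socket exists and the app serves RPCs
	"$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
	sleep 0.5
done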
common/autotest_common.sh@1385 -- # local nb 00:30:55.231 05:17:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:55.488 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:55.488 { 00:30:55.488 "name": "nvme0n1", 00:30:55.488 "aliases": [ 00:30:55.488 "9f6588f3-c8bc-4772-bc5a-4b4ee98f9ef6" 00:30:55.488 ], 00:30:55.488 "product_name": "NVMe disk", 00:30:55.488 "block_size": 4096, 00:30:55.488 "num_blocks": 1310720, 00:30:55.488 "uuid": "9f6588f3-c8bc-4772-bc5a-4b4ee98f9ef6", 00:30:55.488 "numa_id": -1, 00:30:55.488 "assigned_rate_limits": { 00:30:55.488 "rw_ios_per_sec": 0, 00:30:55.488 "rw_mbytes_per_sec": 0, 00:30:55.488 "r_mbytes_per_sec": 0, 00:30:55.488 "w_mbytes_per_sec": 0 00:30:55.488 }, 00:30:55.488 "claimed": true, 00:30:55.488 "claim_type": "read_many_write_one", 00:30:55.488 "zoned": false, 00:30:55.488 "supported_io_types": { 00:30:55.488 "read": true, 00:30:55.488 "write": true, 00:30:55.488 "unmap": true, 00:30:55.488 "flush": true, 00:30:55.488 "reset": true, 00:30:55.488 "nvme_admin": true, 00:30:55.488 "nvme_io": true, 00:30:55.488 "nvme_io_md": false, 00:30:55.488 "write_zeroes": true, 00:30:55.488 "zcopy": false, 00:30:55.488 "get_zone_info": false, 00:30:55.488 "zone_management": false, 00:30:55.488 "zone_append": false, 00:30:55.488 "compare": true, 00:30:55.488 "compare_and_write": false, 00:30:55.488 "abort": true, 00:30:55.488 "seek_hole": false, 00:30:55.488 "seek_data": false, 00:30:55.488 "copy": true, 00:30:55.488 "nvme_iov_md": false 00:30:55.488 }, 00:30:55.488 "driver_specific": { 00:30:55.488 "nvme": [ 00:30:55.488 { 00:30:55.488 "pci_address": "0000:00:11.0", 00:30:55.488 "trid": { 00:30:55.488 "trtype": "PCIe", 00:30:55.488 "traddr": "0000:00:11.0" 00:30:55.488 }, 00:30:55.488 "ctrlr_data": { 00:30:55.488 "cntlid": 0, 00:30:55.488 "vendor_id": "0x1b36", 00:30:55.488 "model_number": "QEMU NVMe Ctrl", 00:30:55.488 "serial_number": "12341", 00:30:55.488 "firmware_revision": "8.0.0", 00:30:55.488 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:55.488 "oacs": { 00:30:55.488 "security": 0, 00:30:55.488 "format": 1, 00:30:55.488 "firmware": 0, 00:30:55.488 "ns_manage": 1 00:30:55.488 }, 00:30:55.488 "multi_ctrlr": false, 00:30:55.488 "ana_reporting": false 00:30:55.488 }, 00:30:55.488 "vs": { 00:30:55.488 "nvme_version": "1.4" 00:30:55.488 }, 00:30:55.488 "ns_data": { 00:30:55.488 "id": 1, 00:30:55.488 "can_share": false 00:30:55.488 } 00:30:55.488 } 00:30:55.488 ], 00:30:55.488 "mp_policy": "active_passive" 00:30:55.488 } 00:30:55.488 } 00:30:55.488 ]' 00:30:55.488 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:55.488 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:55.488 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
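get_bdev_size, traced above, boils the bdev_get_bdevs JSON down to a size in MiB: block_size times num_blocks over 1024^2, which for this QEMU namespace is 4096 * 1310720 / 1048576 = 5120 MiB. The same derivation as a standalone snippet, with the jq filters copied from the trace:

# MiB size of a bdev, as get_bdev_size computes it above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_info=$("$rpc_py" bdev_get_bdevs -b nvme0n1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720
echo $((bs * nb / 1024 / 1024))               # 5120 (MiB)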
'.[] | .uuid' 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=972f0e6f-a741-4658-a819-2b99d284808c 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:55.746 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 972f0e6f-a741-4658-a819-2b99d284808c 00:30:56.004 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:56.263 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae 00:30:56.263 05:17:12 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:56.522 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:56.781 { 00:30:56.781 "name": "c16abdc8-6aa4-49bc-a82e-681032fb731a", 00:30:56.781 "aliases": [ 00:30:56.781 "lvs/nvme0n1p0" 00:30:56.781 ], 00:30:56.781 "product_name": "Logical Volume", 00:30:56.781 "block_size": 4096, 00:30:56.781 "num_blocks": 26476544, 00:30:56.781 "uuid": "c16abdc8-6aa4-49bc-a82e-681032fb731a", 00:30:56.781 "assigned_rate_limits": { 00:30:56.781 "rw_ios_per_sec": 0, 00:30:56.781 "rw_mbytes_per_sec": 0, 00:30:56.781 "r_mbytes_per_sec": 0, 00:30:56.781 "w_mbytes_per_sec": 0 00:30:56.781 }, 00:30:56.781 "claimed": false, 00:30:56.781 "zoned": false, 00:30:56.781 "supported_io_types": { 00:30:56.781 "read": true, 00:30:56.781 "write": true, 00:30:56.781 "unmap": true, 00:30:56.781 "flush": false, 00:30:56.781 "reset": true, 00:30:56.781 "nvme_admin": false, 00:30:56.781 "nvme_io": false, 00:30:56.781 "nvme_io_md": false, 00:30:56.781 "write_zeroes": true, 00:30:56.781 "zcopy": false, 00:30:56.781 "get_zone_info": false, 00:30:56.781 "zone_management": false, 00:30:56.781 "zone_append": 
false, 00:30:56.781 "compare": false, 00:30:56.781 "compare_and_write": false, 00:30:56.781 "abort": false, 00:30:56.781 "seek_hole": true, 00:30:56.781 "seek_data": true, 00:30:56.781 "copy": false, 00:30:56.781 "nvme_iov_md": false 00:30:56.781 }, 00:30:56.781 "driver_specific": { 00:30:56.781 "lvol": { 00:30:56.781 "lvol_store_uuid": "68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae", 00:30:56.781 "base_bdev": "nvme0n1", 00:30:56.781 "thin_provision": true, 00:30:56.781 "num_allocated_clusters": 0, 00:30:56.781 "snapshot": false, 00:30:56.781 "clone": false, 00:30:56.781 "esnap_clone": false 00:30:56.781 } 00:30:56.781 } 00:30:56.781 } 00:30:56.781 ]' 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:56.781 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:57.039 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:57.039 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:57.039 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:57.040 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:57.040 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:57.040 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:57.040 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:57.040 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:57.298 { 00:30:57.298 "name": "c16abdc8-6aa4-49bc-a82e-681032fb731a", 00:30:57.298 "aliases": [ 00:30:57.298 "lvs/nvme0n1p0" 00:30:57.298 ], 00:30:57.298 "product_name": "Logical Volume", 00:30:57.298 "block_size": 4096, 00:30:57.298 "num_blocks": 26476544, 00:30:57.298 "uuid": "c16abdc8-6aa4-49bc-a82e-681032fb731a", 00:30:57.298 "assigned_rate_limits": { 00:30:57.298 "rw_ios_per_sec": 0, 00:30:57.298 "rw_mbytes_per_sec": 0, 00:30:57.298 "r_mbytes_per_sec": 0, 00:30:57.298 "w_mbytes_per_sec": 0 00:30:57.298 }, 00:30:57.298 "claimed": false, 00:30:57.298 "zoned": false, 00:30:57.298 "supported_io_types": { 00:30:57.298 "read": true, 00:30:57.298 "write": true, 00:30:57.298 "unmap": true, 00:30:57.298 "flush": false, 00:30:57.298 "reset": true, 00:30:57.298 "nvme_admin": false, 00:30:57.298 "nvme_io": false, 00:30:57.298 "nvme_io_md": false, 00:30:57.298 "write_zeroes": true, 00:30:57.298 "zcopy": false, 00:30:57.298 "get_zone_info": false, 00:30:57.298 "zone_management": false, 
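The sequence above rebuilds the volume stack for the test: stale lvstores are deleted, a fresh store named lvs goes on nvme0n1, a 103424 MiB thin-provisioned lvol on it becomes the FTL base bdev, and a second controller (nvc0 at 0000:00:10.0) is attached for the NV cache. The 5171 MiB figure equals 103424 / 20 in integer arithmetic, so the cache appears to be provisioned at about 5 % of the base volume. Condensed to the bare RPC calls (UUIDs differ per run):

# Rebuild the lvol stack, as in the ftl/common.sh trace above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for lvs in $("$rpc_py" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
	"$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"   # drop leftovers from earlier tests
done
lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)
"$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"   # -t = thin provisioning
"$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0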
00:30:57.298 "zone_append": false, 00:30:57.298 "compare": false, 00:30:57.298 "compare_and_write": false, 00:30:57.298 "abort": false, 00:30:57.298 "seek_hole": true, 00:30:57.298 "seek_data": true, 00:30:57.298 "copy": false, 00:30:57.298 "nvme_iov_md": false 00:30:57.298 }, 00:30:57.298 "driver_specific": { 00:30:57.298 "lvol": { 00:30:57.298 "lvol_store_uuid": "68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae", 00:30:57.298 "base_bdev": "nvme0n1", 00:30:57.298 "thin_provision": true, 00:30:57.298 "num_allocated_clusters": 0, 00:30:57.298 "snapshot": false, 00:30:57.298 "clone": false, 00:30:57.298 "esnap_clone": false 00:30:57.298 } 00:30:57.298 } 00:30:57.298 } 00:30:57.298 ]' 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:57.298 05:17:13 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:57.557 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c16abdc8-6aa4-49bc-a82e-681032fb731a 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:57.817 { 00:30:57.817 "name": "c16abdc8-6aa4-49bc-a82e-681032fb731a", 00:30:57.817 "aliases": [ 00:30:57.817 "lvs/nvme0n1p0" 00:30:57.817 ], 00:30:57.817 "product_name": "Logical Volume", 00:30:57.817 "block_size": 4096, 00:30:57.817 "num_blocks": 26476544, 00:30:57.817 "uuid": "c16abdc8-6aa4-49bc-a82e-681032fb731a", 00:30:57.817 "assigned_rate_limits": { 00:30:57.817 "rw_ios_per_sec": 0, 00:30:57.817 "rw_mbytes_per_sec": 0, 00:30:57.817 "r_mbytes_per_sec": 0, 00:30:57.817 "w_mbytes_per_sec": 0 00:30:57.817 }, 00:30:57.817 "claimed": false, 00:30:57.817 "zoned": false, 00:30:57.817 "supported_io_types": { 00:30:57.817 "read": true, 00:30:57.817 "write": true, 00:30:57.817 "unmap": true, 00:30:57.817 "flush": false, 00:30:57.817 "reset": true, 00:30:57.817 "nvme_admin": false, 00:30:57.817 "nvme_io": false, 00:30:57.817 "nvme_io_md": false, 00:30:57.817 "write_zeroes": true, 00:30:57.817 "zcopy": false, 00:30:57.817 "get_zone_info": false, 00:30:57.817 "zone_management": false, 00:30:57.817 "zone_append": false, 00:30:57.817 "compare": false, 00:30:57.817 "compare_and_write": false, 00:30:57.817 "abort": false, 00:30:57.817 "seek_hole": 
true, 00:30:57.817 "seek_data": true, 00:30:57.817 "copy": false, 00:30:57.817 "nvme_iov_md": false 00:30:57.817 }, 00:30:57.817 "driver_specific": { 00:30:57.817 "lvol": { 00:30:57.817 "lvol_store_uuid": "68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae", 00:30:57.817 "base_bdev": "nvme0n1", 00:30:57.817 "thin_provision": true, 00:30:57.817 "num_allocated_clusters": 0, 00:30:57.817 "snapshot": false, 00:30:57.817 "clone": false, 00:30:57.817 "esnap_clone": false 00:30:57.817 } 00:30:57.817 } 00:30:57.817 } 00:30:57.817 ]' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c16abdc8-6aa4-49bc-a82e-681032fb731a --l2p_dram_limit 10' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:57.817 05:17:14 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c16abdc8-6aa4-49bc-a82e-681032fb731a --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:57.817 [2024-11-21 05:17:14.535203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.535251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:57.817 [2024-11-21 05:17:14.535263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:57.817 [2024-11-21 05:17:14.535272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.817 [2024-11-21 05:17:14.535324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.535334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.817 [2024-11-21 05:17:14.535342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:57.817 [2024-11-21 05:17:14.535352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.817 [2024-11-21 05:17:14.535371] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:57.817 [2024-11-21 05:17:14.535663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:57.817 [2024-11-21 05:17:14.535687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.535696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.817 [2024-11-21 05:17:14.535703] mngt/ftl_mngt.c: 
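Here the pieces come together: restore.sh builds ftl_construct_args incrementally and creates the FTL bdev with the thin lvol as base device, the nvc0n1p0 split as write-buffer cache, a 10 MiB L2P DRAM limit, and --fast-shutdown enabled by the -f flag; the mngt/ftl_mngt.c step trace that follows is the device starting up. The construction in isolation, with flags verbatim from the trace (the -d UUID is run-specific):

# Create the FTL bdev, as ftl/restore.sh@49-58 does above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ftl_construct_args='bdev_ftl_create -b ftl0 -d c16abdc8-6aa4-49bc-a82e-681032fb731a --l2p_dram_limit 10'
ftl_construct_args+=' -c nvc0n1p0'       # NV cache partition (from -c 0000:00:10.0)
ftl_construct_args+=' --fast-shutdown'   # requested by the test's -f flag
# -t 240 matches the test's 240 s timeout; startup is not instant (the NV
# cache scrub alone takes ~3.5 s in the trace below)
"$rpc_py" -t 240 $ftl_construct_args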
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:30:57.817 [2024-11-21 05:17:14.535712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.817 [2024-11-21 05:17:14.535741] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a7e9664a-2cab-4c59-99ec-1785228b7f52 00:30:57.817 [2024-11-21 05:17:14.537056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.537083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:57.817 [2024-11-21 05:17:14.537096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:57.817 [2024-11-21 05:17:14.537103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.817 [2024-11-21 05:17:14.544014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.544043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.817 [2024-11-21 05:17:14.544053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.844 ms 00:30:57.817 [2024-11-21 05:17:14.544060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.817 [2024-11-21 05:17:14.544131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.544144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.817 [2024-11-21 05:17:14.544152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:57.817 [2024-11-21 05:17:14.544159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.817 [2024-11-21 05:17:14.544204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.817 [2024-11-21 05:17:14.544211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:57.817 [2024-11-21 05:17:14.544219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:57.818 [2024-11-21 05:17:14.544225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.818 [2024-11-21 05:17:14.544245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:57.818 [2024-11-21 05:17:14.545948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.818 [2024-11-21 05:17:14.545976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.818 [2024-11-21 05:17:14.545984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:30:57.818 [2024-11-21 05:17:14.545992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.818 [2024-11-21 05:17:14.546024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.818 [2024-11-21 05:17:14.546035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:57.818 [2024-11-21 05:17:14.546041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:57.818 [2024-11-21 05:17:14.546051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.818 [2024-11-21 05:17:14.546064] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:57.818 [2024-11-21 05:17:14.546177] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:57.818 [2024-11-21 05:17:14.546192] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:57.818 [2024-11-21 05:17:14.546209] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:57.818 [2024-11-21 05:17:14.546217] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546231] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:57.818 [2024-11-21 05:17:14.546246] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:57.818 [2024-11-21 05:17:14.546252] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:57.818 [2024-11-21 05:17:14.546260] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:57.818 [2024-11-21 05:17:14.546265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.818 [2024-11-21 05:17:14.546273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:57.818 [2024-11-21 05:17:14.546279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:30:57.818 [2024-11-21 05:17:14.546287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.818 [2024-11-21 05:17:14.546350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.818 [2024-11-21 05:17:14.546361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:57.818 [2024-11-21 05:17:14.546367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:57.818 [2024-11-21 05:17:14.546374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.818 [2024-11-21 05:17:14.546449] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:57.818 [2024-11-21 05:17:14.546459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:57.818 [2024-11-21 05:17:14.546467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:57.818 [2024-11-21 05:17:14.546491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:57.818 [2024-11-21 05:17:14.546509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:57.818 [2024-11-21 05:17:14.546521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:57.818 [2024-11-21 05:17:14.546528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:57.818 [2024-11-21 05:17:14.546535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:57.818 [2024-11-21 05:17:14.546543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:57.818 [2024-11-21 05:17:14.546549] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:30:57.818 [2024-11-21 05:17:14.546559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:57.818 [2024-11-21 05:17:14.546571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:57.818 [2024-11-21 05:17:14.546589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:57.818 [2024-11-21 05:17:14.546621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:57.818 [2024-11-21 05:17:14.546641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:57.818 [2024-11-21 05:17:14.546665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:57.818 [2024-11-21 05:17:14.546678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:57.818 [2024-11-21 05:17:14.546684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:57.818 [2024-11-21 05:17:14.546699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:57.818 [2024-11-21 05:17:14.546707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:57.818 [2024-11-21 05:17:14.546713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:57.818 [2024-11-21 05:17:14.546720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:57.818 [2024-11-21 05:17:14.546726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:57.818 [2024-11-21 05:17:14.546734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:57.818 [2024-11-21 05:17:14.546747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:57.818 [2024-11-21 05:17:14.546753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546760] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:57.818 [2024-11-21 05:17:14.546768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:57.818 [2024-11-21 05:17:14.546778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:57.818 [2024-11-21 
05:17:14.546785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:57.818 [2024-11-21 05:17:14.546797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:57.818 [2024-11-21 05:17:14.546804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:57.819 [2024-11-21 05:17:14.546812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:57.819 [2024-11-21 05:17:14.546818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:57.819 [2024-11-21 05:17:14.546826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:57.819 [2024-11-21 05:17:14.546832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:57.819 [2024-11-21 05:17:14.546843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:57.819 [2024-11-21 05:17:14.546854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:57.819 [2024-11-21 05:17:14.546871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:57.819 [2024-11-21 05:17:14.546879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:57.819 [2024-11-21 05:17:14.546885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:57.819 [2024-11-21 05:17:14.546894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:57.819 [2024-11-21 05:17:14.546900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:57.819 [2024-11-21 05:17:14.546910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:57.819 [2024-11-21 05:17:14.546917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:57.819 [2024-11-21 05:17:14.546925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:57.819 [2024-11-21 05:17:14.546931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:57.819 [2024-11-21 
05:17:14.546967] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:57.819 [2024-11-21 05:17:14.546974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:57.819 [2024-11-21 05:17:14.546988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:57.819 [2024-11-21 05:17:14.546994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:57.819 [2024-11-21 05:17:14.546999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:57.819 [2024-11-21 05:17:14.547007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.819 [2024-11-21 05:17:14.547013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:57.819 [2024-11-21 05:17:14.547025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:30:57.819 [2024-11-21 05:17:14.547031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.819 [2024-11-21 05:17:14.547074] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:57.819 [2024-11-21 05:17:14.547084] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:02.023 [2024-11-21 05:17:18.039057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.023 [2024-11-21 05:17:18.039173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:02.023 [2024-11-21 05:17:18.039197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3491.955 ms 00:31:02.023 [2024-11-21 05:17:18.039217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.023 [2024-11-21 05:17:18.059285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.023 [2024-11-21 05:17:18.059356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.023 [2024-11-21 05:17:18.059377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.920 ms 00:31:02.023 [2024-11-21 05:17:18.059387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.023 [2024-11-21 05:17:18.059539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.023 [2024-11-21 05:17:18.059551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:02.023 [2024-11-21 05:17:18.059563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:31:02.023 [2024-11-21 05:17:18.059573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.023 [2024-11-21 05:17:18.077307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.023 [2024-11-21 05:17:18.077366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.023 [2024-11-21 05:17:18.077383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.651 ms 00:31:02.023 [2024-11-21 05:17:18.077394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.077446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.077463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.024 [2024-11-21 05:17:18.077476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:02.024 [2024-11-21 05:17:18.077486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.078243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.078292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.024 [2024-11-21 05:17:18.078306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:31:02.024 [2024-11-21 05:17:18.078315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.078446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.078461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.024 [2024-11-21 05:17:18.078474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:31:02.024 [2024-11-21 05:17:18.078483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.090515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.090575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.024 [2024-11-21 05:17:18.090589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms 00:31:02.024 [2024-11-21 05:17:18.090599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.102582] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:02.024 [2024-11-21 05:17:18.107719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.107770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:02.024 [2024-11-21 05:17:18.107789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.008 ms 00:31:02.024 [2024-11-21 05:17:18.107801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.202658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.202750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:02.024 [2024-11-21 05:17:18.202772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.816 ms 00:31:02.024 [2024-11-21 05:17:18.202789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.203008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.203024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:02.024 [2024-11-21 05:17:18.203034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:31:02.024 [2024-11-21 05:17:18.203045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.210115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.210178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:31:02.024 [2024-11-21 05:17:18.210192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms 00:31:02.024 [2024-11-21 05:17:18.210208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.215978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.216038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:02.024 [2024-11-21 05:17:18.216051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.739 ms 00:31:02.024 [2024-11-21 05:17:18.216063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.216433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.216455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:02.024 [2024-11-21 05:17:18.216466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:31:02.024 [2024-11-21 05:17:18.216480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.258640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.258707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:02.024 [2024-11-21 05:17:18.258722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.135 ms 00:31:02.024 [2024-11-21 05:17:18.258740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.267433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.267496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:02.024 [2024-11-21 05:17:18.267508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.627 ms 00:31:02.024 [2024-11-21 05:17:18.267520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.274339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.274397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:02.024 [2024-11-21 05:17:18.274407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.766 ms 00:31:02.024 [2024-11-21 05:17:18.274419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.281659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.281716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:02.024 [2024-11-21 05:17:18.281727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.189 ms 00:31:02.024 [2024-11-21 05:17:18.281742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.281816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.281830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:02.024 [2024-11-21 05:17:18.281841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:02.024 [2024-11-21 05:17:18.281864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.281952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.281966] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:02.024 [2024-11-21 05:17:18.281976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:02.024 [2024-11-21 05:17:18.281988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.283428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3747.637 ms, result 0 00:31:02.024 { 00:31:02.024 "name": "ftl0", 00:31:02.024 "uuid": "a7e9664a-2cab-4c59-99ec-1785228b7f52" 00:31:02.024 } 00:31:02.024 05:17:18 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:02.024 05:17:18 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:02.024 05:17:18 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:02.024 05:17:18 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:02.024 [2024-11-21 05:17:18.713108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.713163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:02.024 [2024-11-21 05:17:18.713178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:02.024 [2024-11-21 05:17:18.713189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.713217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:02.024 [2024-11-21 05:17:18.713897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.713935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:02.024 [2024-11-21 05:17:18.713945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:31:02.024 [2024-11-21 05:17:18.713955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.714233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.024 [2024-11-21 05:17:18.714269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:02.024 [2024-11-21 05:17:18.714278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:31:02.024 [2024-11-21 05:17:18.714293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.024 [2024-11-21 05:17:18.717540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.717564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:02.025 [2024-11-21 05:17:18.717574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:31:02.025 [2024-11-21 05:17:18.717584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.723707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.723748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:02.025 [2024-11-21 05:17:18.723758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:31:02.025 [2024-11-21 05:17:18.723768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.726364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 
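With ftl0 up (the startup management process took 3747.637 ms end to end), the test snapshots the bdev subsystem configuration, wrapped in a {"subsystems": [...]} envelope so it can be replayed into a fresh target during the restore phase, and then unloads the bdev; the shutdown trace below shows FTL persisting the L2P, NV cache metadata, and superblock before marking the device clean. The save-then-unload pair in isolation (the output path is hypothetical):

# Snapshot the bdev config for the later restore, then unload ftl0,
# mirroring ftl/restore.sh@61-65 above. /tmp/ftl.json is an assumed path.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{
	echo '{"subsystems": ['
	"$rpc_py" save_subsystem_config -n bdev
	echo ']}'
} > /tmp/ftl.json
"$rpc_py" bdev_ftl_unload -b ftl0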
[2024-11-21 05:17:18.726408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:02.025 [2024-11-21 05:17:18.726417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:31:02.025 [2024-11-21 05:17:18.726427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.732002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.732044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:02.025 [2024-11-21 05:17:18.732055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.540 ms 00:31:02.025 [2024-11-21 05:17:18.732065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.732199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.732214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:02.025 [2024-11-21 05:17:18.732227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:02.025 [2024-11-21 05:17:18.732236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.734547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.734585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:02.025 [2024-11-21 05:17:18.734595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:31:02.025 [2024-11-21 05:17:18.734605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.736840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.736880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:02.025 [2024-11-21 05:17:18.736888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:31:02.025 [2024-11-21 05:17:18.736897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.738745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.738781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:02.025 [2024-11-21 05:17:18.738790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.813 ms 00:31:02.025 [2024-11-21 05:17:18.738799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.740801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.025 [2024-11-21 05:17:18.740837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:02.025 [2024-11-21 05:17:18.740846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.942 ms 00:31:02.025 [2024-11-21 05:17:18.740854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.025 [2024-11-21 05:17:18.740886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:02.025 [2024-11-21 05:17:18.740903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.740998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741137] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:02.025 [2024-11-21 05:17:18.741333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 
05:17:18.741361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:31:02.026 [2024-11-21 05:17:18.741582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:02.026 [2024-11-21 05:17:18.741810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:02.026 [2024-11-21 05:17:18.741818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7e9664a-2cab-4c59-99ec-1785228b7f52 00:31:02.026 
[2024-11-21 05:17:18.741828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:02.026 [2024-11-21 05:17:18.741836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:02.026 [2024-11-21 05:17:18.741846] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:02.026 [2024-11-21 05:17:18.741854] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:02.026 [2024-11-21 05:17:18.741863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:02.026 [2024-11-21 05:17:18.741874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:02.026 [2024-11-21 05:17:18.741884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:02.026 [2024-11-21 05:17:18.741890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:02.026 [2024-11-21 05:17:18.741898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:02.026 [2024-11-21 05:17:18.741906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.026 [2024-11-21 05:17:18.741916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:02.026 [2024-11-21 05:17:18.741924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:31:02.026 [2024-11-21 05:17:18.741934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.026 [2024-11-21 05:17:18.743943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.026 [2024-11-21 05:17:18.743977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:02.026 [2024-11-21 05:17:18.743987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:31:02.026 [2024-11-21 05:17:18.744001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.026 [2024-11-21 05:17:18.744106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.026 [2024-11-21 05:17:18.744118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:02.026 [2024-11-21 05:17:18.744127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:31:02.026 [2024-11-21 05:17:18.744137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.026 [2024-11-21 05:17:18.751253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.026 [2024-11-21 05:17:18.751290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.026 [2024-11-21 05:17:18.751300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.026 [2024-11-21 05:17:18.751313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.026 [2024-11-21 05:17:18.751370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.026 [2024-11-21 05:17:18.751381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.026 [2024-11-21 05:17:18.751392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.026 [2024-11-21 05:17:18.751402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.026 [2024-11-21 05:17:18.751457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.027 [2024-11-21 05:17:18.751475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.027 [2024-11-21 05:17:18.751483] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.027 [2024-11-21 05:17:18.751492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.027 [2024-11-21 05:17:18.751511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.027 [2024-11-21 05:17:18.751521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.027 [2024-11-21 05:17:18.751528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.027 [2024-11-21 05:17:18.751537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.764338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.764387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.289 [2024-11-21 05:17:18.764397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.764411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.774905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.774957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:02.289 [2024-11-21 05:17:18.774968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.774978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.775075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:02.289 [2024-11-21 05:17:18.775084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.775094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.775154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:02.289 [2024-11-21 05:17:18.775167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.775177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.775261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:02.289 [2024-11-21 05:17:18.775269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.775279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.775324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:02.289 [2024-11-21 05:17:18.775332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.775341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.775397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:02.289 [2024-11-21 05:17:18.775405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.775418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:02.289 [2024-11-21 05:17:18.775489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:02.289 [2024-11-21 05:17:18.775498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:02.289 [2024-11-21 05:17:18.775507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.289 [2024-11-21 05:17:18.775673] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.511 ms, result 0 00:31:02.289 true 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94867 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94867 ']' 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94867 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94867 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:02.289 killing process with pid 94867 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94867' 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94867 00:31:02.289 05:17:18 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94867 00:31:10.438 05:17:25 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:12.981 262144+0 records in 00:31:12.981 262144+0 records out 00:31:12.981 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.53484 s, 304 MB/s 00:31:12.981 05:17:29 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:14.894 05:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:14.894 [2024-11-21 05:17:31.336367] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
00:31:14.894 [2024-11-21 05:17:31.336484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95072 ] 00:31:14.894 [2024-11-21 05:17:31.491462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.894 [2024-11-21 05:17:31.518826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.157 [2024-11-21 05:17:31.637783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.157 [2024-11-21 05:17:31.637855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.157 [2024-11-21 05:17:31.796034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.157 [2024-11-21 05:17:31.796086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:15.157 [2024-11-21 05:17:31.796102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.157 [2024-11-21 05:17:31.796110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.157 [2024-11-21 05:17:31.796162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.157 [2024-11-21 05:17:31.796172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.157 [2024-11-21 05:17:31.796185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:15.157 [2024-11-21 05:17:31.796193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.796220] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:15.158 [2024-11-21 05:17:31.796551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:15.158 [2024-11-21 05:17:31.796583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.796592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.158 [2024-11-21 05:17:31.796601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:31:15.158 [2024-11-21 05:17:31.796625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.798215] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:15.158 [2024-11-21 05:17:31.801674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.801709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:15.158 [2024-11-21 05:17:31.801720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:31:15.158 [2024-11-21 05:17:31.801733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.801792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.801806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:15.158 [2024-11-21 05:17:31.801815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:15.158 [2024-11-21 05:17:31.801822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.809701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:15.158 [2024-11-21 05:17:31.809730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.158 [2024-11-21 05:17:31.809749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.830 ms 00:31:15.158 [2024-11-21 05:17:31.809757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.809848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.809857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.158 [2024-11-21 05:17:31.809865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:15.158 [2024-11-21 05:17:31.809874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.809923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.809933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:15.158 [2024-11-21 05:17:31.809942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:15.158 [2024-11-21 05:17:31.809948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.809975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:15.158 [2024-11-21 05:17:31.812046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.812073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.158 [2024-11-21 05:17:31.812082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms 00:31:15.158 [2024-11-21 05:17:31.812092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.812128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.812142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:15.158 [2024-11-21 05:17:31.812150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:15.158 [2024-11-21 05:17:31.812158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.812194] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:15.158 [2024-11-21 05:17:31.812215] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:15.158 [2024-11-21 05:17:31.812256] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:15.158 [2024-11-21 05:17:31.812276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:15.158 [2024-11-21 05:17:31.812383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:15.158 [2024-11-21 05:17:31.812401] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:15.158 [2024-11-21 05:17:31.812412] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:15.158 [2024-11-21 05:17:31.812426] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812436] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812444] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:15.158 [2024-11-21 05:17:31.812452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:15.158 [2024-11-21 05:17:31.812460] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:15.158 [2024-11-21 05:17:31.812467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:15.158 [2024-11-21 05:17:31.812475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.812486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:15.158 [2024-11-21 05:17:31.812493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:31:15.158 [2024-11-21 05:17:31.812503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.812586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.158 [2024-11-21 05:17:31.812596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:15.158 [2024-11-21 05:17:31.812604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:15.158 [2024-11-21 05:17:31.812624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.158 [2024-11-21 05:17:31.812726] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:15.158 [2024-11-21 05:17:31.812748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:15.158 [2024-11-21 05:17:31.812758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:15.158 [2024-11-21 05:17:31.812791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:15.158 [2024-11-21 05:17:31.812815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.158 [2024-11-21 05:17:31.812836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:15.158 [2024-11-21 05:17:31.812844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:15.158 [2024-11-21 05:17:31.812852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.158 [2024-11-21 05:17:31.812860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:15.158 [2024-11-21 05:17:31.812868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:15.158 [2024-11-21 05:17:31.812876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:15.158 [2024-11-21 05:17:31.812893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812902] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:15.158 [2024-11-21 05:17:31.812918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:15.158 [2024-11-21 05:17:31.812941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:15.158 [2024-11-21 05:17:31.812969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.158 [2024-11-21 05:17:31.812985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:15.158 [2024-11-21 05:17:31.812992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:15.158 [2024-11-21 05:17:31.812999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.158 [2024-11-21 05:17:31.813007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:15.158 [2024-11-21 05:17:31.813015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:15.158 [2024-11-21 05:17:31.813022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.158 [2024-11-21 05:17:31.813030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:15.158 [2024-11-21 05:17:31.813037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:15.158 [2024-11-21 05:17:31.813045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.158 [2024-11-21 05:17:31.813052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:15.158 [2024-11-21 05:17:31.813060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:15.158 [2024-11-21 05:17:31.813067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.813074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:15.158 [2024-11-21 05:17:31.813082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:15.158 [2024-11-21 05:17:31.813092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.813100] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:15.158 [2024-11-21 05:17:31.813109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:15.158 [2024-11-21 05:17:31.813119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.158 [2024-11-21 05:17:31.813128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.158 [2024-11-21 05:17:31.813137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:15.159 [2024-11-21 05:17:31.813145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:15.159 [2024-11-21 05:17:31.813156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:15.159 
[2024-11-21 05:17:31.813164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:15.159 [2024-11-21 05:17:31.813171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:15.159 [2024-11-21 05:17:31.813179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:15.159 [2024-11-21 05:17:31.813189] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:15.159 [2024-11-21 05:17:31.813198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:15.159 [2024-11-21 05:17:31.813213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:15.159 [2024-11-21 05:17:31.813220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:15.159 [2024-11-21 05:17:31.813238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:15.159 [2024-11-21 05:17:31.813246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:15.159 [2024-11-21 05:17:31.813254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:15.159 [2024-11-21 05:17:31.813261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:15.159 [2024-11-21 05:17:31.813270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:15.159 [2024-11-21 05:17:31.813277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:15.159 [2024-11-21 05:17:31.813285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:15.159 [2024-11-21 05:17:31.813321] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:15.159 [2024-11-21 05:17:31.813330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.159 [2024-11-21 05:17:31.813346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:15.159 [2024-11-21 05:17:31.813353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:15.159 [2024-11-21 05:17:31.813363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:15.159 [2024-11-21 05:17:31.813371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.813379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:15.159 [2024-11-21 05:17:31.813387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:31:15.159 [2024-11-21 05:17:31.813395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.827799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.827836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.159 [2024-11-21 05:17:31.827849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.355 ms 00:31:15.159 [2024-11-21 05:17:31.827862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.827950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.827958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:15.159 [2024-11-21 05:17:31.827967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:15.159 [2024-11-21 05:17:31.827974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.852891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.852956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.159 [2024-11-21 05:17:31.852975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.862 ms 00:31:15.159 [2024-11-21 05:17:31.852989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.853051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.853076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.159 [2024-11-21 05:17:31.853090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:15.159 [2024-11-21 05:17:31.853108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.853796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.853844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.159 [2024-11-21 05:17:31.853861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:31:15.159 [2024-11-21 05:17:31.853874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.854087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.854106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.159 [2024-11-21 05:17:31.854130] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:31:15.159 [2024-11-21 05:17:31.854142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.863732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.863774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.159 [2024-11-21 05:17:31.863791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.560 ms 00:31:15.159 [2024-11-21 05:17:31.863804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.868029] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:15.159 [2024-11-21 05:17:31.868078] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:15.159 [2024-11-21 05:17:31.868090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.868099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:15.159 [2024-11-21 05:17:31.868108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:31:15.159 [2024-11-21 05:17:31.868116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.883764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.883810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:15.159 [2024-11-21 05:17:31.883826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.592 ms 00:31:15.159 [2024-11-21 05:17:31.883835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.886483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.886529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:15.159 [2024-11-21 05:17:31.886540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:31:15.159 [2024-11-21 05:17:31.886548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.159 [2024-11-21 05:17:31.889037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.159 [2024-11-21 05:17:31.889081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:15.159 [2024-11-21 05:17:31.889092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:31:15.159 [2024-11-21 05:17:31.889100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.889485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.889506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:15.422 [2024-11-21 05:17:31.889516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:31:15.422 [2024-11-21 05:17:31.889525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.918127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.918208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:15.422 [2024-11-21 05:17:31.918224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.574 ms 00:31:15.422 [2024-11-21 05:17:31.918234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.926715] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:15.422 [2024-11-21 05:17:31.930350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.930399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:15.422 [2024-11-21 05:17:31.930420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.058 ms 00:31:15.422 [2024-11-21 05:17:31.930430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.930518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.930529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:15.422 [2024-11-21 05:17:31.930544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:15.422 [2024-11-21 05:17:31.930553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.930657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.930674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:15.422 [2024-11-21 05:17:31.930688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:15.422 [2024-11-21 05:17:31.930699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.930725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.930738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:15.422 [2024-11-21 05:17:31.930748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:15.422 [2024-11-21 05:17:31.930759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.930803] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:15.422 [2024-11-21 05:17:31.930815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.930824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:15.422 [2024-11-21 05:17:31.930833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:15.422 [2024-11-21 05:17:31.930841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.937674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.937725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:15.422 [2024-11-21 05:17:31.937738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:31:15.422 [2024-11-21 05:17:31.937747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 [2024-11-21 05:17:31.937840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.422 [2024-11-21 05:17:31.937856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:15.422 [2024-11-21 05:17:31.937867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:15.422 [2024-11-21 05:17:31.937877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.422 
[2024-11-21 05:17:31.939686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 143.063 ms, result 0 00:31:16.367
[2024-11-21T05:18:30.114Z] Copying: 1007168/1048576 [kB] (10236 kBps)
[2024-11-21T05:18:30.114Z] Copying: 1023/1024 [MB] (39 MBps)
[2024-11-21T05:18:30.114Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-11-21 05:18:29.966403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.380
[2024-11-21 05:18:29.966450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:13.380
[2024-11-21 05:18:29.966470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:13.380
[2024-11-21 05:18:29.966478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.380
[2024-11-21 05:18:29.966502] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:13.380
[2024-11-21 05:18:29.967083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.380
[2024-11-21 05:18:29.967109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:13.380
[2024-11-21 05:18:29.967119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:32:13.380
[2024-11-21 05:18:29.967127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.380
[2024-11-21 05:18:29.968648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.381
[2024-11-21 05:18:29.968687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:13.381
[2024-11-21 05:18:29.968697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:32:13.381
[2024-11-21 05:18:29.968704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.381
[2024-11-21 05:18:29.968733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.381
[2024-11-21 05:18:29.968742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:13.381
[2024-11-21 05:18:29.968750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:13.381
[2024-11-21 05:18:29.968758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.381
[2024-11-21 05:18:29.968805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.381
[2024-11-21 05:18:29.968814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:13.381
[2024-11-21 05:18:29.968822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:13.381
[2024-11-21 05:18:29.968829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.381
[2024-11-21 05:18:29.968842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:13.381
[2024-11-21 05:18:29.968854 … 05:18:29.969662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 … Band 100: 0 / 261120 wr_cnt: 0 state: free (identical line repeated for all 100 bands)
[2024-11-21 05:18:29.969677] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:13.382
[2024-11-21 05:18:29.969685] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7e9664a-2cab-4c59-99ec-1785228b7f52 00:32:13.382
[2024-11-21 05:18:29.969693] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:13.382
[2024-11-21 05:18:29.969700] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:13.382
[2024-11-21 05:18:29.969707] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:13.382
[2024-11-21 05:18:29.969714] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:13.382 [2024-11-21
05:18:29.969729] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:13.382 [2024-11-21 05:18:29.969742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:13.382 [2024-11-21 05:18:29.969749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:13.382 [2024-11-21 05:18:29.969756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:13.382 [2024-11-21 05:18:29.969762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:13.382 [2024-11-21 05:18:29.969769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.382 [2024-11-21 05:18:29.969777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:13.382 [2024-11-21 05:18:29.969785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:32:13.382 [2024-11-21 05:18:29.969795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.971569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.382 [2024-11-21 05:18:29.971596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:13.382 [2024-11-21 05:18:29.971606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:32:13.382 [2024-11-21 05:18:29.971625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.971727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.382 [2024-11-21 05:18:29.971742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:13.382 [2024-11-21 05:18:29.971754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:32:13.382 [2024-11-21 05:18:29.971763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.977983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.978018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.382 [2024-11-21 05:18:29.978028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.978036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.978093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.978101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.382 [2024-11-21 05:18:29.978113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.978121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.978170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.978179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.382 [2024-11-21 05:18:29.978187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.978194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.978209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.978217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.382 [2024-11-21 05:18:29.978225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.978235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.989543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.989594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.382 [2024-11-21 05:18:29.989604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.989636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.998817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.998860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.382 [2024-11-21 05:18:29.998872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.998890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.998941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.998955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.382 [2024-11-21 05:18:29.998963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.382 [2024-11-21 05:18:29.998971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.382 [2024-11-21 05:18:29.999020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.382 [2024-11-21 05:18:29.999029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.383 [2024-11-21 05:18:29.999037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.383 [2024-11-21 05:18:29.999045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.383 [2024-11-21 05:18:29.999108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.383 [2024-11-21 05:18:29.999117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.383 [2024-11-21 05:18:29.999126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.383 [2024-11-21 05:18:29.999134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.383 [2024-11-21 05:18:29.999158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.383 [2024-11-21 05:18:29.999166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:13.383 [2024-11-21 05:18:29.999175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.383 [2024-11-21 05:18:29.999183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.383 [2024-11-21 05:18:29.999222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.383 [2024-11-21 05:18:29.999231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.383 [2024-11-21 05:18:29.999239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.383 [2024-11-21 05:18:29.999247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.383 [2024-11-21 05:18:29.999288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.383 [2024-11-21 05:18:29.999299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.383 
[2024-11-21 05:18:29.999307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.383 [2024-11-21 05:18:29.999315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.383 [2024-11-21 05:18:29.999442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 33.005 ms, result 0 00:32:13.955 00:32:13.955 00:32:13.955 05:18:30 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:13.955 [2024-11-21 05:18:30.609810] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 00:32:13.955 [2024-11-21 05:18:30.609959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95668 ] 00:32:14.217 [2024-11-21 05:18:30.772722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.217 [2024-11-21 05:18:30.814327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.480 [2024-11-21 05:18:30.966579] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:14.480 [2024-11-21 05:18:30.966710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:14.481 [2024-11-21 05:18:31.131219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.131295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:14.481 [2024-11-21 05:18:31.131313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:14.481 [2024-11-21 05:18:31.131326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.131400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.131412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:14.481 [2024-11-21 05:18:31.131423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:14.481 [2024-11-21 05:18:31.131439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.131465] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:14.481 [2024-11-21 05:18:31.131792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:14.481 [2024-11-21 05:18:31.131817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.131826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:14.481 [2024-11-21 05:18:31.131837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:32:14.481 [2024-11-21 05:18:31.131848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.132336] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:14.481 [2024-11-21 05:18:31.132386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.132395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load 
super block 00:32:14.481 [2024-11-21 05:18:31.132406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:14.481 [2024-11-21 05:18:31.132414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.132482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.132496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:14.481 [2024-11-21 05:18:31.132510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:14.481 [2024-11-21 05:18:31.132525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.132813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.132825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:14.481 [2024-11-21 05:18:31.132834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:32:14.481 [2024-11-21 05:18:31.132843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.132935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.132948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:14.481 [2024-11-21 05:18:31.132957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:14.481 [2024-11-21 05:18:31.132965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.132993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.133010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:14.481 [2024-11-21 05:18:31.133019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:14.481 [2024-11-21 05:18:31.133027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.133050] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:14.481 [2024-11-21 05:18:31.135952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.135997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:14.481 [2024-11-21 05:18:31.136008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:32:14.481 [2024-11-21 05:18:31.136016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.136053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.136061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:14.481 [2024-11-21 05:18:31.136077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:14.481 [2024-11-21 05:18:31.136084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.136138] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:14.481 [2024-11-21 05:18:31.136163] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:14.481 [2024-11-21 05:18:31.136206] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:14.481 [2024-11-21 
05:18:31.136228] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:14.481 [2024-11-21 05:18:31.136343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:14.481 [2024-11-21 05:18:31.136354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:14.481 [2024-11-21 05:18:31.136370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:14.481 [2024-11-21 05:18:31.136384] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:14.481 [2024-11-21 05:18:31.136397] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:14.481 [2024-11-21 05:18:31.136408] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:14.481 [2024-11-21 05:18:31.136419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:14.481 [2024-11-21 05:18:31.136428] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:14.481 [2024-11-21 05:18:31.136435] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:14.481 [2024-11-21 05:18:31.136443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.136451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:14.481 [2024-11-21 05:18:31.136459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:32:14.481 [2024-11-21 05:18:31.136467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.136550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.481 [2024-11-21 05:18:31.136559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:14.481 [2024-11-21 05:18:31.136570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:14.481 [2024-11-21 05:18:31.136580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.481 [2024-11-21 05:18:31.136713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:14.481 [2024-11-21 05:18:31.136728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:14.481 [2024-11-21 05:18:31.136743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:14.481 [2024-11-21 05:18:31.136752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.481 [2024-11-21 05:18:31.136762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:14.481 [2024-11-21 05:18:31.136771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:14.481 [2024-11-21 05:18:31.136779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:14.481 [2024-11-21 05:18:31.136794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:14.481 [2024-11-21 05:18:31.136802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:14.481 [2024-11-21 05:18:31.136811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:14.481 [2024-11-21 05:18:31.136820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:14.481 [2024-11-21 05:18:31.136830] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:14.481 [2024-11-21 05:18:31.136839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:14.481 [2024-11-21 05:18:31.136848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:14.481 [2024-11-21 05:18:31.136856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:14.481 [2024-11-21 05:18:31.136864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.481 [2024-11-21 05:18:31.136873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:14.481 [2024-11-21 05:18:31.136881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:14.481 [2024-11-21 05:18:31.136893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.481 [2024-11-21 05:18:31.136902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:14.481 [2024-11-21 05:18:31.136911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:14.482 [2024-11-21 05:18:31.136919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.482 [2024-11-21 05:18:31.136927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:14.482 [2024-11-21 05:18:31.136936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:14.482 [2024-11-21 05:18:31.136944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.482 [2024-11-21 05:18:31.136952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:14.482 [2024-11-21 05:18:31.136960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:14.482 [2024-11-21 05:18:31.136968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.482 [2024-11-21 05:18:31.136977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:14.482 [2024-11-21 05:18:31.136985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:14.482 [2024-11-21 05:18:31.136992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:14.482 [2024-11-21 05:18:31.137000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:14.482 [2024-11-21 05:18:31.137008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:14.482 [2024-11-21 05:18:31.137016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:14.482 [2024-11-21 05:18:31.137030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:14.482 [2024-11-21 05:18:31.137038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:14.482 [2024-11-21 05:18:31.137045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:14.482 [2024-11-21 05:18:31.137053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:14.482 [2024-11-21 05:18:31.137061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:14.482 [2024-11-21 05:18:31.137069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.482 [2024-11-21 05:18:31.137077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:14.482 [2024-11-21 05:18:31.137084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:14.482 [2024-11-21 05:18:31.137091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.482 [2024-11-21 
05:18:31.137100] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:14.482 [2024-11-21 05:18:31.137108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:14.482 [2024-11-21 05:18:31.137116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:14.482 [2024-11-21 05:18:31.137128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:14.482 [2024-11-21 05:18:31.137139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:14.482 [2024-11-21 05:18:31.137146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:14.482 [2024-11-21 05:18:31.137153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:14.482 [2024-11-21 05:18:31.137163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:14.482 [2024-11-21 05:18:31.137172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:14.482 [2024-11-21 05:18:31.137179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:14.482 [2024-11-21 05:18:31.137188] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:14.482 [2024-11-21 05:18:31.137198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:14.482 [2024-11-21 05:18:31.137215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:14.482 [2024-11-21 05:18:31.137222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:14.482 [2024-11-21 05:18:31.137230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:14.482 [2024-11-21 05:18:31.137237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:14.482 [2024-11-21 05:18:31.137286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:14.482 [2024-11-21 05:18:31.137301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:14.482 [2024-11-21 05:18:31.137313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:14.482 [2024-11-21 05:18:31.137325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:14.482 [2024-11-21 05:18:31.137336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137378] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:14.482 [2024-11-21 05:18:31.137403] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:14.482 [2024-11-21 05:18:31.137417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:14.482 [2024-11-21 05:18:31.137445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:14.482 [2024-11-21 05:18:31.137457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:14.482 [2024-11-21 05:18:31.137469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:14.482 [2024-11-21 05:18:31.137482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.482 [2024-11-21 05:18:31.137492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:14.482 [2024-11-21 05:18:31.137501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:32:14.482 [2024-11-21 05:18:31.137513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.482 [2024-11-21 05:18:31.151915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.482 [2024-11-21 05:18:31.151968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:14.482 [2024-11-21 05:18:31.151981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.352 ms 00:32:14.482 [2024-11-21 05:18:31.151989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.482 [2024-11-21 05:18:31.152087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.482 [2024-11-21 05:18:31.152097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:14.482 [2024-11-21 05:18:31.152106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:14.482 [2024-11-21 05:18:31.152118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.482 [2024-11-21 05:18:31.179156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.482 [2024-11-21 05:18:31.179255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:14.482 [2024-11-21 05:18:31.179279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.970 ms 00:32:14.482 [2024-11-21 05:18:31.179296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.482 [2024-11-21 05:18:31.179372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.482 [2024-11-21 05:18:31.179392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:14.482 [2024-11-21 05:18:31.179410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:14.482 [2024-11-21 05:18:31.179424] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.482 [2024-11-21 05:18:31.179603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.179653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:14.483 [2024-11-21 05:18:31.179684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:32:14.483 [2024-11-21 05:18:31.179697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.179924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.179968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:14.483 [2024-11-21 05:18:31.179991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:32:14.483 [2024-11-21 05:18:31.180005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.192197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.192252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:14.483 [2024-11-21 05:18:31.192275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.159 ms 00:32:14.483 [2024-11-21 05:18:31.192288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.192452] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:14.483 [2024-11-21 05:18:31.192467] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:14.483 [2024-11-21 05:18:31.192478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.192489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:14.483 [2024-11-21 05:18:31.192500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:14.483 [2024-11-21 05:18:31.192509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.204877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.204933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:14.483 [2024-11-21 05:18:31.204952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.345 ms 00:32:14.483 [2024-11-21 05:18:31.204960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.205106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.205117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:14.483 [2024-11-21 05:18:31.205131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:32:14.483 [2024-11-21 05:18:31.205139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.205199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.205209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:14.483 [2024-11-21 05:18:31.205222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:14.483 [2024-11-21 05:18:31.205231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:14.483 [2024-11-21 05:18:31.205707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.205744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:14.483 [2024-11-21 05:18:31.205755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:32:14.483 [2024-11-21 05:18:31.205763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.483 [2024-11-21 05:18:31.205786] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:14.483 [2024-11-21 05:18:31.205805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.483 [2024-11-21 05:18:31.205814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:14.483 [2024-11-21 05:18:31.205826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:14.483 [2024-11-21 05:18:31.205834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-11-21 05:18:31.216936] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:14.745 [2024-11-21 05:18:31.217108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-11-21 05:18:31.217119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:14.745 [2024-11-21 05:18:31.217139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.253 ms 00:32:14.745 [2024-11-21 05:18:31.217151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-11-21 05:18:31.219796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-11-21 05:18:31.219838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:14.745 [2024-11-21 05:18:31.219852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:32:14.745 [2024-11-21 05:18:31.219860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-11-21 05:18:31.219985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-11-21 05:18:31.219996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:14.745 [2024-11-21 05:18:31.220006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:14.745 [2024-11-21 05:18:31.220013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-11-21 05:18:31.220044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-11-21 05:18:31.220053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:14.745 [2024-11-21 05:18:31.220061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:14.745 [2024-11-21 05:18:31.220072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745 [2024-11-21 05:18:31.220112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:14.745 [2024-11-21 05:18:31.220122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745 [2024-11-21 05:18:31.220129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:14.745 [2024-11-21 05:18:31.220142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:14.745 [2024-11-21 05:18:31.220149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:14.745
[2024-11-21 05:18:31.227799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745
[2024-11-21 05:18:31.227865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:14.745
[2024-11-21 05:18:31.227878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:32:14.745
[2024-11-21 05:18:31.227891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745
[2024-11-21 05:18:31.228005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.745
[2024-11-21 05:18:31.228017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:14.745
[2024-11-21 05:18:31.228030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:14.745
[2024-11-21 05:18:31.228038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.745
[2024-11-21 05:18:31.229476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.655 ms, result 0 00:32:16.134
[2024-11-21T05:18:33.441Z] Copying: 14/1024 [MB] (14 MBps)
(… intermediate Copying progress lines condensed; per-interval throughput ranged 10–25 MBps …)
[2024-11-21T05:19:37.711Z] Copying: 1019/1024 [MB] (17 MBps)
[2024-11-21T05:19:38.285Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-11-21 05:19:38.013093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.551
[2024-11-21 05:19:38.013190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:21.551
[2024-11-21 05:19:38.013209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:21.551
[2024-11-21 05:19:38.013220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.551
[2024-11-21 05:19:38.013249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:21.551
[2024-11-21 05:19:38.014322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.551
[2024-11-21 05:19:38.014368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:21.551
[2024-11-21 05:19:38.014391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:33:21.551
[2024-11-21 05:19:38.014434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.551
[2024-11-21 05:19:38.014841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.551
[2024-11-21 05:19:38.014872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:21.551
[2024-11-21 05:19:38.014889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:33:21.551
[2024-11-21 05:19:38.014899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.551
[2024-11-21 05:19:38.014937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.551
[2024-11-21 05:19:38.014958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:21.551
[2024-11-21 05:19:38.014968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:21.551
[2024-11-21 05:19:38.014977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.551
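The copy pass above moved 1024 MiB through spdk_dd from the restored ftl0 bdev at an average of 15 MBps. The logged --count=262144 is a block count; the byte total below assumes the FTL bdev's usual 4 KiB block size, which this capture does not state explicitly:

  # sanity-check of the copy size (the 4096-byte block size is an assumption, not taken from this log)
  echo $((262144 * 4096))   # 1073741824 bytes = 1024 MiB, matching "Copying: 1024/1024 [MB]"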
[2024-11-21 05:19:38.015054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.551
[2024-11-21 05:19:38.015065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:21.551
[2024-11-21 05:19:38.015079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:21.551
[2024-11-21 05:19:38.015093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.551
[2024-11-21 05:19:38.015109] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:21.551
[2024-11-21 05:19:38.015124 … 05:19:38.015787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 … Band 70: 0 / 261120 wr_cnt: 0 state: free (identical line repeated for each band; the capture ends partway through this dump)
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.015998] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.016006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.016014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.016021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.016029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:21.552 [2024-11-21 05:19:38.016046] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:21.552 [2024-11-21 05:19:38.016059] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7e9664a-2cab-4c59-99ec-1785228b7f52 00:33:21.552 [2024-11-21 05:19:38.016067] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:21.552 [2024-11-21 05:19:38.016075] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:21.552 [2024-11-21 05:19:38.016083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:21.552 [2024-11-21 05:19:38.016092] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:21.552 [2024-11-21 05:19:38.016105] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:21.552 [2024-11-21 05:19:38.016113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:21.552 [2024-11-21 05:19:38.016121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:21.552 [2024-11-21 05:19:38.016130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:21.552 [2024-11-21 05:19:38.016137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:21.552 [2024-11-21 05:19:38.016146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.552 [2024-11-21 05:19:38.016155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:21.552 [2024-11-21 05:19:38.016163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:33:21.552 [2024-11-21 05:19:38.016171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.552 [2024-11-21 05:19:38.019685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.552 [2024-11-21 05:19:38.019724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:21.552 [2024-11-21 05:19:38.019737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.493 ms 00:33:21.552 [2024-11-21 05:19:38.019746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.552 [2024-11-21 05:19:38.019915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:21.552 [2024-11-21 05:19:38.019926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:21.553 [2024-11-21 05:19:38.019936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:33:21.553 [2024-11-21 05:19:38.019949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.032159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.032209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:21.553 [2024-11-21 05:19:38.032221] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.032231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.032317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.032327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:21.553 [2024-11-21 05:19:38.032336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.032351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.032443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.032461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:21.553 [2024-11-21 05:19:38.032471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.032484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.032503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.032512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:21.553 [2024-11-21 05:19:38.032521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.032530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.055772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.055830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:21.553 [2024-11-21 05:19:38.055846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.055856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.074989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:21.553 [2024-11-21 05:19:38.075065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:21.553 [2024-11-21 05:19:38.075176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:21.553 [2024-11-21 05:19:38.075248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:33:21.553 [2024-11-21 05:19:38.075354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:21.553 [2024-11-21 05:19:38.075412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:21.553 [2024-11-21 05:19:38.075500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:21.553 [2024-11-21 05:19:38.075588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:21.553 [2024-11-21 05:19:38.075598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:21.553 [2024-11-21 05:19:38.075626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:21.553 [2024-11-21 05:19:38.075802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 62.657 ms, result 0 00:33:21.814 00:33:21.814 00:33:21.814 05:19:38 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:24.360 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:24.360 05:19:40 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:24.360 [2024-11-21 05:19:40.635034] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
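A note on reading the dump above: each "Bands validity" line is valid blocks / band capacity in blocks, and the statistics that follow give the device UUID, total valid LBAs, write counters and WAF. Assuming SPDK FTL's usual 4 KiB logical block size (an assumption; the log never prints it), a 261120-block band works out to 1020 MiB, so 100 bands roughly match the ~103424 MiB base device reported during startup below. The harness then verifies the restored file with md5sum -c, which recomputes each hash in the manifest and prints "<path>: OK" on a match (the testfile: OK line above), and rewrites the file into the FTL bdev at an offset; with dd-style semantics, --seek skips output blocks, so under the same 4 KiB assumption the second copy starts 512 MiB into ftl0. A quick sanity check of both numbers:

    # assumes 4 KiB FTL logical blocks; neither value is printed by the log itself
    echo $((261120 * 4096 / 1024 / 1024))   # 1020 -> MiB per 261120-block band
    echo $((131072 * 4096 / 1024 / 1024))   # 512  -> MiB offset implied by --seek=131072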
00:33:24.360 [2024-11-21 05:19:40.635160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96384 ] 00:33:24.360 [2024-11-21 05:19:40.794344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.360 [2024-11-21 05:19:40.830211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.360 [2024-11-21 05:19:40.980260] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:24.360 [2024-11-21 05:19:40.980356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:24.624 [2024-11-21 05:19:41.143796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.144028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:24.624 [2024-11-21 05:19:41.144056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:24.624 [2024-11-21 05:19:41.144067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.144146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.144167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:24.624 [2024-11-21 05:19:41.144177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:24.624 [2024-11-21 05:19:41.144185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.144214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:24.624 [2024-11-21 05:19:41.144498] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:24.624 [2024-11-21 05:19:41.144520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.144534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:24.624 [2024-11-21 05:19:41.144546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:33:24.624 [2024-11-21 05:19:41.144560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.144932] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:24.624 [2024-11-21 05:19:41.144964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.144975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:24.624 [2024-11-21 05:19:41.144987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:24.624 [2024-11-21 05:19:41.145001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.145069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.145084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:24.624 [2024-11-21 05:19:41.145093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:24.624 [2024-11-21 05:19:41.145104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.145381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
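Each management step in the startup trace here is logged as the same four-record group (Action, name, duration, status), so step timings can be read straight off the dump, and the layout summary that follows below is internally consistent: the superblock reports 20971520 L2P entries at 4 bytes each, which is exactly the 80.00 MiB shown for the l2p region. A quick check of that arithmetic, plus a sketch of pairing step names with durations from a per-line log of this shape (ftl.log is an illustrative filename):

    awk 'BEGIN { print 20971520 * 4 / 1024 / 1024 }'                  # 80 -> MiB needed for the L2P table
    grep -oE 'name: [^*]+|duration: [0-9.]+ ms' ftl.log | paste - -   # step name alongside its duration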
00:33:24.624 [2024-11-21 05:19:41.145402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:24.624 [2024-11-21 05:19:41.145413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:33:24.624 [2024-11-21 05:19:41.145421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.145519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.145529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:24.624 [2024-11-21 05:19:41.145538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:33:24.624 [2024-11-21 05:19:41.145547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.145571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.145584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:24.624 [2024-11-21 05:19:41.145598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:24.624 [2024-11-21 05:19:41.145623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.145648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:24.624 [2024-11-21 05:19:41.148454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.148495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:24.624 [2024-11-21 05:19:41.148508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:33:24.624 [2024-11-21 05:19:41.148522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.148561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.148571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:24.624 [2024-11-21 05:19:41.148581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:24.624 [2024-11-21 05:19:41.148589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.148659] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:24.624 [2024-11-21 05:19:41.148688] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:24.624 [2024-11-21 05:19:41.148731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:24.624 [2024-11-21 05:19:41.148754] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:24.624 [2024-11-21 05:19:41.148866] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:24.624 [2024-11-21 05:19:41.148878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:24.624 [2024-11-21 05:19:41.148890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:24.624 [2024-11-21 05:19:41.148903] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:24.624 [2024-11-21 05:19:41.148917] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:24.624 [2024-11-21 05:19:41.148930] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:24.624 [2024-11-21 05:19:41.148940] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:24.624 [2024-11-21 05:19:41.148950] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:24.624 [2024-11-21 05:19:41.148959] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:24.624 [2024-11-21 05:19:41.148967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.148976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:24.624 [2024-11-21 05:19:41.148985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:33:24.624 [2024-11-21 05:19:41.148993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.149085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.624 [2024-11-21 05:19:41.149095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:24.624 [2024-11-21 05:19:41.149111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:24.624 [2024-11-21 05:19:41.149124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.624 [2024-11-21 05:19:41.149232] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:24.624 [2024-11-21 05:19:41.149243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:24.624 [2024-11-21 05:19:41.149284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:24.624 [2024-11-21 05:19:41.149295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:24.624 [2024-11-21 05:19:41.149312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:24.624 [2024-11-21 05:19:41.149338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:24.624 [2024-11-21 05:19:41.149346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:24.624 [2024-11-21 05:19:41.149363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:24.624 [2024-11-21 05:19:41.149370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:24.624 [2024-11-21 05:19:41.149378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:24.624 [2024-11-21 05:19:41.149386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:24.624 [2024-11-21 05:19:41.149395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:24.624 [2024-11-21 05:19:41.149403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:24.624 [2024-11-21 05:19:41.149422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:24.624 [2024-11-21 05:19:41.149435] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:24.624 [2024-11-21 05:19:41.149452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.624 [2024-11-21 05:19:41.149468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:24.624 [2024-11-21 05:19:41.149475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.624 [2024-11-21 05:19:41.149490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:24.624 [2024-11-21 05:19:41.149498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:24.624 [2024-11-21 05:19:41.149506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.625 [2024-11-21 05:19:41.149514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:24.625 [2024-11-21 05:19:41.149521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:24.625 [2024-11-21 05:19:41.149529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:24.625 [2024-11-21 05:19:41.149537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:24.625 [2024-11-21 05:19:41.149544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:24.625 [2024-11-21 05:19:41.149552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:24.625 [2024-11-21 05:19:41.149566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:24.625 [2024-11-21 05:19:41.149574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:24.625 [2024-11-21 05:19:41.149581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:24.625 [2024-11-21 05:19:41.149589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:24.625 [2024-11-21 05:19:41.149597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:24.625 [2024-11-21 05:19:41.149621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.625 [2024-11-21 05:19:41.149630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:24.625 [2024-11-21 05:19:41.149637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:24.625 [2024-11-21 05:19:41.149646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.625 [2024-11-21 05:19:41.149654] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:24.625 [2024-11-21 05:19:41.149663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:24.625 [2024-11-21 05:19:41.149671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:24.625 [2024-11-21 05:19:41.149684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:24.625 [2024-11-21 05:19:41.149699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:24.625 [2024-11-21 05:19:41.149709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:24.625 [2024-11-21 05:19:41.149717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:24.625 
[2024-11-21 05:19:41.149727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:24.625 [2024-11-21 05:19:41.149734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:24.625 [2024-11-21 05:19:41.149742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:24.625 [2024-11-21 05:19:41.149751] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:24.625 [2024-11-21 05:19:41.149761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:24.625 [2024-11-21 05:19:41.149778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:24.625 [2024-11-21 05:19:41.149786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:24.625 [2024-11-21 05:19:41.149792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:24.625 [2024-11-21 05:19:41.149799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:24.625 [2024-11-21 05:19:41.149807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:24.625 [2024-11-21 05:19:41.149815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:24.625 [2024-11-21 05:19:41.149823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:24.625 [2024-11-21 05:19:41.149830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:24.625 [2024-11-21 05:19:41.149836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:24.625 [2024-11-21 05:19:41.149874] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:24.625 [2024-11-21 05:19:41.149884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:24.625 [2024-11-21 05:19:41.149901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:24.625 [2024-11-21 05:19:41.149908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:24.625 [2024-11-21 05:19:41.149915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:24.625 [2024-11-21 05:19:41.149923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.149931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:24.625 [2024-11-21 05:19:41.149939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:33:24.625 [2024-11-21 05:19:41.149946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.164286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.164465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:24.625 [2024-11-21 05:19:41.164534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.295 ms 00:33:24.625 [2024-11-21 05:19:41.164561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.164677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.164702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:24.625 [2024-11-21 05:19:41.164722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:33:24.625 [2024-11-21 05:19:41.164741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.188259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.188497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:24.625 [2024-11-21 05:19:41.188600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.429 ms 00:33:24.625 [2024-11-21 05:19:41.188660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.188738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.188770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:24.625 [2024-11-21 05:19:41.188797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:24.625 [2024-11-21 05:19:41.188831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.188988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.189034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:24.625 [2024-11-21 05:19:41.189269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:33:24.625 [2024-11-21 05:19:41.189332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.189531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.189656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:24.625 [2024-11-21 05:19:41.189730] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:33:24.625 [2024-11-21 05:19:41.189760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.201756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.201933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:24.625 [2024-11-21 05:19:41.201999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.536 ms 00:33:24.625 [2024-11-21 05:19:41.202036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.202225] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:24.625 [2024-11-21 05:19:41.202330] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:24.625 [2024-11-21 05:19:41.202367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.202388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:24.625 [2024-11-21 05:19:41.202435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:33:24.625 [2024-11-21 05:19:41.202465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.214956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.215134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:24.625 [2024-11-21 05:19:41.215197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.443 ms 00:33:24.625 [2024-11-21 05:19:41.215219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.215379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.215403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:24.625 [2024-11-21 05:19:41.215424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:33:24.625 [2024-11-21 05:19:41.215443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.215522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.215688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:24.625 [2024-11-21 05:19:41.215721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:24.625 [2024-11-21 05:19:41.215741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.625 [2024-11-21 05:19:41.216096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.625 [2024-11-21 05:19:41.216138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:24.625 [2024-11-21 05:19:41.216162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:33:24.625 [2024-11-21 05:19:41.216368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.216476] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:24.626 [2024-11-21 05:19:41.216554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.216931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:24.626 [2024-11-21 05:19:41.217050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:33:24.626 [2024-11-21 05:19:41.217395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.228316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:24.626 [2024-11-21 05:19:41.228598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.228655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:24.626 [2024-11-21 05:19:41.228732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.137 ms 00:33:24.626 [2024-11-21 05:19:41.228755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.231296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.231437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:24.626 [2024-11-21 05:19:41.231501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:33:24.626 [2024-11-21 05:19:41.231525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.231669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.231701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:24.626 [2024-11-21 05:19:41.231776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:24.626 [2024-11-21 05:19:41.231807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.231855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.231877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:24.626 [2024-11-21 05:19:41.231897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:24.626 [2024-11-21 05:19:41.231958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.232023] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:24.626 [2024-11-21 05:19:41.232054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.232072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:24.626 [2024-11-21 05:19:41.232094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:24.626 [2024-11-21 05:19:41.232112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.239539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.239733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:24.626 [2024-11-21 05:19:41.239795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.394 ms 00:33:24.626 [2024-11-21 05:19:41.239818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.240038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:24.626 [2024-11-21 05:19:41.240207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:24.626 [2024-11-21 05:19:41.240241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.055 ms 00:33:24.626 [2024-11-21 05:19:41.240263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:24.626 [2024-11-21 05:19:41.241871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.536 ms, result 0 00:33:25.571  [2024-11-21T05:19:43.696Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-21T05:19:44.269Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-21T05:19:45.269Z] Copying: 30/1024 [MB] (10 MBps) [2024-11-21T05:19:46.656Z] Copying: 44/1024 [MB] (14 MBps) [2024-11-21T05:19:47.597Z] Copying: 57/1024 [MB] (12 MBps) [2024-11-21T05:19:48.537Z] Copying: 82/1024 [MB] (25 MBps) [2024-11-21T05:19:49.478Z] Copying: 94588/1048576 [kB] (9992 kBps) [2024-11-21T05:19:50.417Z] Copying: 108/1024 [MB] (16 MBps) [2024-11-21T05:19:51.357Z] Copying: 120/1024 [MB] (11 MBps) [2024-11-21T05:19:52.299Z] Copying: 150/1024 [MB] (30 MBps) [2024-11-21T05:19:53.688Z] Copying: 172/1024 [MB] (21 MBps) [2024-11-21T05:19:54.260Z] Copying: 188/1024 [MB] (16 MBps) [2024-11-21T05:19:55.645Z] Copying: 208/1024 [MB] (19 MBps) [2024-11-21T05:19:56.586Z] Copying: 223/1024 [MB] (14 MBps) [2024-11-21T05:19:57.529Z] Copying: 244/1024 [MB] (20 MBps) [2024-11-21T05:19:58.471Z] Copying: 265/1024 [MB] (21 MBps) [2024-11-21T05:19:59.414Z] Copying: 283/1024 [MB] (18 MBps) [2024-11-21T05:20:00.358Z] Copying: 299/1024 [MB] (15 MBps) [2024-11-21T05:20:01.290Z] Copying: 312/1024 [MB] (13 MBps) [2024-11-21T05:20:02.670Z] Copying: 364/1024 [MB] (51 MBps) [2024-11-21T05:20:03.613Z] Copying: 403/1024 [MB] (39 MBps) [2024-11-21T05:20:04.559Z] Copying: 425/1024 [MB] (21 MBps) [2024-11-21T05:20:05.505Z] Copying: 438/1024 [MB] (13 MBps) [2024-11-21T05:20:06.450Z] Copying: 453/1024 [MB] (14 MBps) [2024-11-21T05:20:07.389Z] Copying: 466/1024 [MB] (12 MBps) [2024-11-21T05:20:08.326Z] Copying: 478/1024 [MB] (12 MBps) [2024-11-21T05:20:09.269Z] Copying: 505/1024 [MB] (26 MBps) [2024-11-21T05:20:10.657Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-21T05:20:11.601Z] Copying: 532/1024 [MB] (16 MBps) [2024-11-21T05:20:12.547Z] Copying: 552/1024 [MB] (19 MBps) [2024-11-21T05:20:13.490Z] Copying: 572/1024 [MB] (19 MBps) [2024-11-21T05:20:14.432Z] Copying: 588/1024 [MB] (16 MBps) [2024-11-21T05:20:15.370Z] Copying: 607/1024 [MB] (18 MBps) [2024-11-21T05:20:16.314Z] Copying: 624/1024 [MB] (17 MBps) [2024-11-21T05:20:17.324Z] Copying: 638/1024 [MB] (14 MBps) [2024-11-21T05:20:18.268Z] Copying: 653/1024 [MB] (14 MBps) [2024-11-21T05:20:19.654Z] Copying: 665/1024 [MB] (12 MBps) [2024-11-21T05:20:20.596Z] Copying: 678/1024 [MB] (12 MBps) [2024-11-21T05:20:21.530Z] Copying: 692/1024 [MB] (13 MBps) [2024-11-21T05:20:22.464Z] Copying: 731/1024 [MB] (39 MBps) [2024-11-21T05:20:23.399Z] Copying: 769/1024 [MB] (37 MBps) [2024-11-21T05:20:24.340Z] Copying: 801/1024 [MB] (32 MBps) [2024-11-21T05:20:25.275Z] Copying: 816/1024 [MB] (15 MBps) [2024-11-21T05:20:26.648Z] Copying: 836/1024 [MB] (20 MBps) [2024-11-21T05:20:27.589Z] Copying: 883/1024 [MB] (47 MBps) [2024-11-21T05:20:28.533Z] Copying: 926/1024 [MB] (42 MBps) [2024-11-21T05:20:29.477Z] Copying: 940/1024 [MB] (14 MBps) [2024-11-21T05:20:30.421Z] Copying: 955/1024 [MB] (14 MBps) [2024-11-21T05:20:31.367Z] Copying: 968/1024 [MB] (13 MBps) [2024-11-21T05:20:32.312Z] Copying: 982/1024 [MB] (13 MBps) [2024-11-21T05:20:33.256Z] Copying: 997/1024 [MB] (15 MBps) [2024-11-21T05:20:34.634Z] Copying: 1008/1024 [MB] (11 MBps) [2024-11-21T05:20:35.207Z] Copying: 1023/1024 [MB] (14 MBps) [2024-11-21T05:20:35.207Z] Copying: 
1024/1024 [MB] (average 19 MBps)[2024-11-21 05:20:35.016047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.473 [2024-11-21 05:20:35.016146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:18.473 [2024-11-21 05:20:35.016169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:18.473 [2024-11-21 05:20:35.016180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.473 [2024-11-21 05:20:35.019116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:18.473 [2024-11-21 05:20:35.021514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.473 [2024-11-21 05:20:35.021558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:18.473 [2024-11-21 05:20:35.021574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:34:18.473 [2024-11-21 05:20:35.021586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.473 [2024-11-21 05:20:35.032579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.473 [2024-11-21 05:20:35.032654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:18.473 [2024-11-21 05:20:35.032667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.758 ms 00:34:18.473 [2024-11-21 05:20:35.032676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.473 [2024-11-21 05:20:35.032713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.473 [2024-11-21 05:20:35.032723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:18.473 [2024-11-21 05:20:35.032733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:18.473 [2024-11-21 05:20:35.032742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.473 [2024-11-21 05:20:35.032813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.473 [2024-11-21 05:20:35.032826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:18.473 [2024-11-21 05:20:35.032839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:34:18.473 [2024-11-21 05:20:35.032848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.473 [2024-11-21 05:20:35.032863] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:18.473 [2024-11-21 05:20:35.032877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:34:18.473 [2024-11-21 05:20:35.032888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 
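The copy summary just above and the statistics dumped below cross-check: 1024 MB written between 05:19:41 and 05:20:35 is roughly 54 seconds, matching the reported 19 MBps average, and WAF is total writes divided by user writes (consistent with both dumps here, including the earlier shutdown dump where 0 user writes gave WAF: inf). Using the figures from this log:

    awk 'BEGIN { printf "%.0f MBps\n", 1024 / 54 }'         # ~19 MBps average
    awk 'BEGIN { printf "WAF = %.4f\n", 128544 / 128512 }'  # 1.0002, as dumped below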
00:34:18.473 [2024-11-21 05:20:35.032938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.032998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:18.473 [2024-11-21 05:20:35.033076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 
wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033628] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:18.474 [2024-11-21 05:20:35.033759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:18.475 [2024-11-21 05:20:35.033767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:18.475 [2024-11-21 05:20:35.033775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:18.475 [2024-11-21 05:20:35.033783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:18.475 [2024-11-21 05:20:35.033792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:18.475 [2024-11-21 05:20:35.033810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:18.475 [2024-11-21 05:20:35.033825] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7e9664a-2cab-4c59-99ec-1785228b7f52 00:34:18.475 [2024-11-21 05:20:35.033835] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:34:18.475 [2024-11-21 05:20:35.033843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:34:18.475 [2024-11-21 05:20:35.033853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:34:18.475 [2024-11-21 05:20:35.033862] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:34:18.475 [2024-11-21 05:20:35.033870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:18.475 [2024-11-21 05:20:35.033882] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:18.475 [2024-11-21 05:20:35.033909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:18.475 [2024-11-21 05:20:35.033917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:18.475 [2024-11-21 05:20:35.033924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:18.475 [2024-11-21 05:20:35.033933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.475 [2024-11-21 05:20:35.033942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:18.475 [2024-11-21 05:20:35.033951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:34:18.475 [2024-11-21 05:20:35.033958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.037252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.475 [2024-11-21 05:20:35.037329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:18.475 [2024-11-21 05:20:35.037343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.276 ms 00:34:18.475 [2024-11-21 05:20:35.037357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.037533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:18.475 [2024-11-21 05:20:35.037544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:18.475 [2024-11-21 05:20:35.037555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:34:18.475 [2024-11-21 05:20:35.037564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.048015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.048074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:18.475 [2024-11-21 05:20:35.048086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.048095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.048170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.048180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:18.475 [2024-11-21 05:20:35.048190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.048198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.048258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.048271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:18.475 [2024-11-21 05:20:35.048290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.048303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.048323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.048333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:18.475 [2024-11-21 05:20:35.048341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.048350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
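The ftl_dev_dump_bands / ftl_dev_dump_stats records above report total writes: 128544 against user writes: 128512 and log WAF: 1.0002. A minimal sketch of how the dump lines above reduce to that figure, assuming WAF here is simply total media writes divided by user writes, which matches the dumped numbers; the regex and helper names below are illustrative, not SPDK code:

import re

# Record shapes taken from the ftl_debug.c notices above; parse_band and
# waf are hypothetical helpers, not part of SPDK.
BAND = re.compile(r"Band\s+(\d+):\s+(\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def parse_band(record):
    """Return (band, valid_lbas, total_lbas, wr_cnt, state) or None."""
    m = BAND.search(record)
    return None if m is None else (
        int(m[1]), int(m[2]), int(m[3]), int(m[4]), m[5])

def waf(total_writes, user_writes):
    """Write amplification factor: media writes per user write."""
    return total_writes / user_writes

print(parse_band("Band 33: 0 / 261120 wr_cnt: 0 state: free"))
print(f"WAF = {waf(128544, 128512):.4f}")  # -> WAF = 1.0002, as dumped

The same arithmetic accounts for the second dump near the end of this log: 2592 total writes over 2560 user writes gives exactly the 1.0125 reported there.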
00:34:18.475 [2024-11-21 05:20:35.066685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.066946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:18.475 [2024-11-21 05:20:35.066973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.066984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:18.475 [2024-11-21 05:20:35.082284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:18.475 [2024-11-21 05:20:35.082430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:18.475 [2024-11-21 05:20:35.082506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:18.475 [2024-11-21 05:20:35.082637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:18.475 [2024-11-21 05:20:35.082703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:18.475 [2024-11-21 05:20:35.082783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:18.475 [2024-11-21 05:20:35.082855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:18.475 [2024-11-21 05:20:35.082868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:18.475 [2024-11-21 05:20:35.082878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:18.475 [2024-11-21 05:20:35.082888] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:18.475 [2024-11-21 05:20:35.083053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 69.278 ms, result 0
00:34:19.416
00:34:19.416
00:34:19.416 05:20:35 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:34:19.416 [2024-11-21 05:20:35.974860] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization...
00:34:19.416 [2024-11-21 05:20:35.975027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96937 ]
00:34:19.416 [2024-11-21 05:20:36.140495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:19.678 [2024-11-21 05:20:36.182001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:19.678 [2024-11-21 05:20:36.333713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:19.678 [2024-11-21 05:20:36.333806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:34:19.940 [2024-11-21 05:20:36.498241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:19.940 [2024-11-21 05:20:36.498317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:34:19.940 [2024-11-21 05:20:36.498334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:34:19.940 [2024-11-21 05:20:36.498343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:19.940 [2024-11-21 05:20:36.498412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:19.940 [2024-11-21 05:20:36.498424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:19.940 [2024-11-21 05:20:36.498434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:34:19.940 [2024-11-21 05:20:36.498443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:19.940 [2024-11-21 05:20:36.498470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:34:19.940 [2024-11-21 05:20:36.498807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:34:19.940 [2024-11-21 05:20:36.498837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:19.940 [2024-11-21 05:20:36.498847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:34:19.940 [2024-11-21 05:20:36.498857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms
00:34:19.940 [2024-11-21 05:20:36.498870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:19.940 [2024-11-21 05:20:36.499545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:34:19.940 [2024-11-21 05:20:36.499636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:19.940 [2024-11-21 05:20:36.499649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:34:19.940 [2024-11-21 05:20:36.499663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms
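The restore.sh step above reads back through the ftl0 bdev with spdk_dd. A back-of-the-envelope check of the window given by --skip=131072 and --count=262144, as a sketch only: it assumes dd-style semantics where skip and count are counted in input-device blocks, and a 4096-byte ftl0 block size, both consistent with the 1024 MB total shown by the copy progress further down:

# Hypothetical sanity check of the spdk_dd window above; the 4096-byte
# block size is an assumption consistent with the progress totals below.
BLOCK_SIZE = 4096
MIB = 1024 * 1024

skip, count = 131072, 262144  # --skip / --count from the command line

print(skip * BLOCK_SIZE // MIB)   # 512  -> start 512 MiB into ftl0
print(count * BLOCK_SIZE // MIB)  # 1024 -> 1024 MiB window, matching the
                                  #         "Copying: .../1024 [MB]" lines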
00:34:19.940 [2024-11-21 05:20:36.499672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.499813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.499831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:19.941 [2024-11-21 05:20:36.499842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:34:19.941 [2024-11-21 05:20:36.499850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.500147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.500161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:19.941 [2024-11-21 05:20:36.500174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:34:19.941 [2024-11-21 05:20:36.500190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.500289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.500301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:19.941 [2024-11-21 05:20:36.500311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:34:19.941 [2024-11-21 05:20:36.500324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.500349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.500359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:19.941 [2024-11-21 05:20:36.500372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:19.941 [2024-11-21 05:20:36.500382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.500407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:19.941 [2024-11-21 05:20:36.503303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.503360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:19.941 [2024-11-21 05:20:36.503372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:34:19.941 [2024-11-21 05:20:36.503382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.503421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.503439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:19.941 [2024-11-21 05:20:36.503449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:19.941 [2024-11-21 05:20:36.503458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.503520] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:19.941 [2024-11-21 05:20:36.503553] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:19.941 [2024-11-21 05:20:36.503597] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:19.941 [2024-11-21 05:20:36.503645] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 
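The layout setup above is followed by two views of the same regions: dump_region prints each region's offset and size in MiB, while the superblock dump lists (type, ver, blk_offs, blk_sz) tuples in blocks. The two agree under a 4096-byte FTL block size; a small sketch of the conversion, where region_mib is an illustrative helper and the 4 KiB block size is an assumption that matches the figures below:

# Convert a superblock region record below, e.g.
#   "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000",
# into the MiB form printed by dump_region (4096-byte FTL blocks assumed).
FTL_BLOCK_SIZE = 4096
MIB = 1024 * 1024

def region_mib(blk_offs, blk_sz):
    """Return (offset_mib, size_mib) for one layout region."""
    return (blk_offs * FTL_BLOCK_SIZE / MIB, blk_sz * FTL_BLOCK_SIZE / MIB)

offs, size = region_mib(0x20, 0x5000)
print(f"offset {offs:.2f} MiB, size {size:.2f} MiB")
# -> offset 0.12 MiB, size 80.00 MiB: the l2p region listed below

The 80 MiB also lines up with the L2P parameters dumped below: 20971520 entries at an address size of 4 bytes is exactly 80 MiB.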
00:34:19.941 [2024-11-21 05:20:36.503759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:19.941 [2024-11-21 05:20:36.503772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:19.941 [2024-11-21 05:20:36.503783] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:19.941 [2024-11-21 05:20:36.503795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:19.941 [2024-11-21 05:20:36.503806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:19.941 [2024-11-21 05:20:36.503819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:19.941 [2024-11-21 05:20:36.503830] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:19.941 [2024-11-21 05:20:36.503840] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:19.941 [2024-11-21 05:20:36.503850] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:19.941 [2024-11-21 05:20:36.503859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.503866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:19.941 [2024-11-21 05:20:36.503874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:34:19.941 [2024-11-21 05:20:36.503884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.503972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.941 [2024-11-21 05:20:36.503982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:19.941 [2024-11-21 05:20:36.503993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:19.941 [2024-11-21 05:20:36.504002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.941 [2024-11-21 05:20:36.504109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:19.941 [2024-11-21 05:20:36.504129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:19.941 [2024-11-21 05:20:36.504138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:19.941 [2024-11-21 05:20:36.504166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:19.941 [2024-11-21 05:20:36.504200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:19.941 [2024-11-21 05:20:36.504217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:19.941 [2024-11-21 05:20:36.504224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:19.941 [2024-11-21 05:20:36.504231] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:34:19.941 [2024-11-21 05:20:36.504238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:19.941 [2024-11-21 05:20:36.504248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:19.941 [2024-11-21 05:20:36.504256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:19.941 [2024-11-21 05:20:36.504270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:19.941 [2024-11-21 05:20:36.504297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:19.941 [2024-11-21 05:20:36.504327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:19.941 [2024-11-21 05:20:36.504348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:19.941 [2024-11-21 05:20:36.504368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:19.941 [2024-11-21 05:20:36.504393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:19.941 [2024-11-21 05:20:36.504406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:19.941 [2024-11-21 05:20:36.504413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:19.941 [2024-11-21 05:20:36.504419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:19.941 [2024-11-21 05:20:36.504428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:19.941 [2024-11-21 05:20:36.504437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:19.941 [2024-11-21 05:20:36.504444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:19.941 [2024-11-21 05:20:36.504459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:19.941 [2024-11-21 05:20:36.504466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504473] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:19.941 [2024-11-21 
05:20:36.504481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:19.941 [2024-11-21 05:20:36.504489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:19.941 [2024-11-21 05:20:36.504509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:19.941 [2024-11-21 05:20:36.504515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:19.941 [2024-11-21 05:20:36.504522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:19.941 [2024-11-21 05:20:36.504529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:19.941 [2024-11-21 05:20:36.504535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:19.941 [2024-11-21 05:20:36.504541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:19.941 [2024-11-21 05:20:36.504551] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:19.941 [2024-11-21 05:20:36.504564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:19.941 [2024-11-21 05:20:36.504575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:19.941 [2024-11-21 05:20:36.504583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:19.942 [2024-11-21 05:20:36.504590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:19.942 [2024-11-21 05:20:36.504598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:19.942 [2024-11-21 05:20:36.504604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:19.942 [2024-11-21 05:20:36.504629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:19.942 [2024-11-21 05:20:36.504636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:19.942 [2024-11-21 05:20:36.504644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:19.942 [2024-11-21 05:20:36.504652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:19.942 [2024-11-21 05:20:36.504659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:19.942 [2024-11-21 05:20:36.504669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:19.942 [2024-11-21 05:20:36.504676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:19.942 [2024-11-21 05:20:36.504685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:34:19.942 [2024-11-21 05:20:36.504694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:19.942 [2024-11-21 05:20:36.504703] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:19.942 [2024-11-21 05:20:36.504717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:19.942 [2024-11-21 05:20:36.504730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:19.942 [2024-11-21 05:20:36.504738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:19.942 [2024-11-21 05:20:36.504747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:19.942 [2024-11-21 05:20:36.504755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:19.942 [2024-11-21 05:20:36.504763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.504771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:19.942 [2024-11-21 05:20:36.504779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:34:19.942 [2024-11-21 05:20:36.504787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.519137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.519186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:19.942 [2024-11-21 05:20:36.519198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.306 ms 00:34:19.942 [2024-11-21 05:20:36.519208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.519306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.519315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:19.942 [2024-11-21 05:20:36.519330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:34:19.942 [2024-11-21 05:20:36.519343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.543200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.543284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:19.942 [2024-11-21 05:20:36.543303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.792 ms 00:34:19.942 [2024-11-21 05:20:36.543323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.543383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.543399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:19.942 [2024-11-21 05:20:36.543411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:19.942 [2024-11-21 05:20:36.543423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.543564] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.543581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:19.942 [2024-11-21 05:20:36.543653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:34:19.942 [2024-11-21 05:20:36.543666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.543837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.543856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:19.942 [2024-11-21 05:20:36.543872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:34:19.942 [2024-11-21 05:20:36.543883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.555654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.555700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:19.942 [2024-11-21 05:20:36.555721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.743 ms 00:34:19.942 [2024-11-21 05:20:36.555733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.555889] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:19.942 [2024-11-21 05:20:36.555909] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:19.942 [2024-11-21 05:20:36.555919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.555929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:19.942 [2024-11-21 05:20:36.555938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:34:19.942 [2024-11-21 05:20:36.555951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.568283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.568331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:19.942 [2024-11-21 05:20:36.568354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.308 ms 00:34:19.942 [2024-11-21 05:20:36.568362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.568507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.568522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:19.942 [2024-11-21 05:20:36.568534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:34:19.942 [2024-11-21 05:20:36.568546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.568604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.568647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:19.942 [2024-11-21 05:20:36.568665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:19.942 [2024-11-21 05:20:36.568674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.569020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 
05:20:36.569035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:19.942 [2024-11-21 05:20:36.569044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:34:19.942 [2024-11-21 05:20:36.569052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.569073] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:19.942 [2024-11-21 05:20:36.569083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.569095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:19.942 [2024-11-21 05:20:36.569112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:19.942 [2024-11-21 05:20:36.569119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.579943] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:19.942 [2024-11-21 05:20:36.580315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.580334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:19.942 [2024-11-21 05:20:36.580346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.177 ms 00:34:19.942 [2024-11-21 05:20:36.580355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.583140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.583181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:19.942 [2024-11-21 05:20:36.583192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:34:19.942 [2024-11-21 05:20:36.583201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.583294] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:19.942 [2024-11-21 05:20:36.584096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.584193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:19.942 [2024-11-21 05:20:36.584249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:34:19.942 [2024-11-21 05:20:36.584278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.584481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.584957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:19.942 [2024-11-21 05:20:36.584991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:19.942 [2024-11-21 05:20:36.585002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.585104] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:19.942 [2024-11-21 05:20:36.585118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.585127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:19.942 [2024-11-21 05:20:36.585136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:19.942 [2024-11-21 05:20:36.585146] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.592948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.942 [2024-11-21 05:20:36.593005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:19.942 [2024-11-21 05:20:36.593018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.766 ms 00:34:19.942 [2024-11-21 05:20:36.593037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.942 [2024-11-21 05:20:36.593135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:19.943 [2024-11-21 05:20:36.593146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:19.943 [2024-11-21 05:20:36.593162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:19.943 [2024-11-21 05:20:36.593171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:19.943 [2024-11-21 05:20:36.596602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.383 ms, result 0 00:34:21.330  [2024-11-21T05:20:39.007Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-21T05:20:39.965Z] Copying: 25/1024 [MB] (13 MBps) [2024-11-21T05:20:40.910Z] Copying: 40/1024 [MB] (14 MBps) [2024-11-21T05:20:41.853Z] Copying: 60/1024 [MB] (20 MBps) [2024-11-21T05:20:43.242Z] Copying: 84/1024 [MB] (24 MBps) [2024-11-21T05:20:43.814Z] Copying: 99/1024 [MB] (15 MBps) [2024-11-21T05:20:45.199Z] Copying: 112/1024 [MB] (12 MBps) [2024-11-21T05:20:46.145Z] Copying: 130/1024 [MB] (17 MBps) [2024-11-21T05:20:47.088Z] Copying: 146/1024 [MB] (16 MBps) [2024-11-21T05:20:48.079Z] Copying: 168/1024 [MB] (22 MBps) [2024-11-21T05:20:49.053Z] Copying: 194/1024 [MB] (25 MBps) [2024-11-21T05:20:49.997Z] Copying: 220/1024 [MB] (26 MBps) [2024-11-21T05:20:50.944Z] Copying: 241/1024 [MB] (20 MBps) [2024-11-21T05:20:51.890Z] Copying: 257/1024 [MB] (16 MBps) [2024-11-21T05:20:52.836Z] Copying: 279/1024 [MB] (21 MBps) [2024-11-21T05:20:54.226Z] Copying: 297/1024 [MB] (18 MBps) [2024-11-21T05:20:55.172Z] Copying: 311/1024 [MB] (14 MBps) [2024-11-21T05:20:56.117Z] Copying: 330/1024 [MB] (18 MBps) [2024-11-21T05:20:57.063Z] Copying: 341/1024 [MB] (11 MBps) [2024-11-21T05:20:58.006Z] Copying: 353/1024 [MB] (12 MBps) [2024-11-21T05:20:58.947Z] Copying: 364/1024 [MB] (11 MBps) [2024-11-21T05:20:59.893Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-21T05:21:00.837Z] Copying: 387/1024 [MB] (11 MBps) [2024-11-21T05:21:02.225Z] Copying: 398/1024 [MB] (11 MBps) [2024-11-21T05:21:03.171Z] Copying: 410/1024 [MB] (11 MBps) [2024-11-21T05:21:04.116Z] Copying: 424/1024 [MB] (14 MBps) [2024-11-21T05:21:05.061Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-21T05:21:06.008Z] Copying: 453/1024 [MB] (17 MBps) [2024-11-21T05:21:06.953Z] Copying: 464/1024 [MB] (11 MBps) [2024-11-21T05:21:07.898Z] Copying: 475/1024 [MB] (11 MBps) [2024-11-21T05:21:08.844Z] Copying: 486/1024 [MB] (11 MBps) [2024-11-21T05:21:10.236Z] Copying: 497/1024 [MB] (11 MBps) [2024-11-21T05:21:10.804Z] Copying: 511/1024 [MB] (13 MBps) [2024-11-21T05:21:12.190Z] Copying: 532/1024 [MB] (21 MBps) [2024-11-21T05:21:13.137Z] Copying: 550/1024 [MB] (17 MBps) [2024-11-21T05:21:14.084Z] Copying: 565/1024 [MB] (15 MBps) [2024-11-21T05:21:15.029Z] Copying: 581/1024 [MB] (16 MBps) [2024-11-21T05:21:15.973Z] Copying: 602/1024 [MB] (20 MBps) [2024-11-21T05:21:16.913Z] Copying: 625/1024 [MB] (23 MBps) [2024-11-21T05:21:17.857Z] Copying: 647/1024 [MB] (21 MBps) 
[2024-11-21T05:21:18.803Z] Copying: 665/1024 [MB] (18 MBps) [2024-11-21T05:21:20.221Z] Copying: 676/1024 [MB] (10 MBps) [2024-11-21T05:21:20.833Z] Copying: 686/1024 [MB] (10 MBps) [2024-11-21T05:21:22.221Z] Copying: 697/1024 [MB] (10 MBps) [2024-11-21T05:21:23.166Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-21T05:21:24.116Z] Copying: 728/1024 [MB] (21 MBps) [2024-11-21T05:21:25.063Z] Copying: 739/1024 [MB] (11 MBps) [2024-11-21T05:21:26.008Z] Copying: 750/1024 [MB] (10 MBps) [2024-11-21T05:21:26.955Z] Copying: 768/1024 [MB] (17 MBps) [2024-11-21T05:21:27.900Z] Copying: 781/1024 [MB] (13 MBps) [2024-11-21T05:21:28.846Z] Copying: 801/1024 [MB] (19 MBps) [2024-11-21T05:21:30.229Z] Copying: 821/1024 [MB] (19 MBps) [2024-11-21T05:21:30.802Z] Copying: 842/1024 [MB] (21 MBps) [2024-11-21T05:21:32.186Z] Copying: 860/1024 [MB] (17 MBps) [2024-11-21T05:21:33.129Z] Copying: 873/1024 [MB] (13 MBps) [2024-11-21T05:21:34.071Z] Copying: 893/1024 [MB] (19 MBps) [2024-11-21T05:21:35.015Z] Copying: 907/1024 [MB] (14 MBps) [2024-11-21T05:21:35.959Z] Copying: 925/1024 [MB] (17 MBps) [2024-11-21T05:21:36.903Z] Copying: 945/1024 [MB] (20 MBps) [2024-11-21T05:21:37.848Z] Copying: 964/1024 [MB] (19 MBps) [2024-11-21T05:21:39.239Z] Copying: 984/1024 [MB] (20 MBps) [2024-11-21T05:21:39.809Z] Copying: 1000/1024 [MB] (15 MBps) [2024-11-21T05:21:40.070Z] Copying: 1020/1024 [MB] (20 MBps) [2024-11-21T05:21:40.646Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-21 05:21:40.385944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.912 [2024-11-21 05:21:40.386015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:23.912 [2024-11-21 05:21:40.386033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:23.912 [2024-11-21 05:21:40.386042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.912 [2024-11-21 05:21:40.386068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:23.912 [2024-11-21 05:21:40.386826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.912 [2024-11-21 05:21:40.386858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:23.912 [2024-11-21 05:21:40.386875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:35:23.912 [2024-11-21 05:21:40.386887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.912 [2024-11-21 05:21:40.387202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.912 [2024-11-21 05:21:40.387225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:23.912 [2024-11-21 05:21:40.387244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:35:23.912 [2024-11-21 05:21:40.387254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.912 [2024-11-21 05:21:40.387288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.912 [2024-11-21 05:21:40.387297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:23.912 [2024-11-21 05:21:40.387311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:23.912 [2024-11-21 05:21:40.387318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.912 [2024-11-21 05:21:40.387382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.912 [2024-11-21 05:21:40.387395] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:23.912 [2024-11-21 05:21:40.387408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:35:23.912 [2024-11-21 05:21:40.387416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.912 [2024-11-21 05:21:40.387430] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:23.912 [2024-11-21 05:21:40.387444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:23.912 [2024-11-21 05:21:40.387459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:35:23.912 [2024-11-21 05:21:40.387654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:23.912 [2024-11-21 05:21:40.387893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.387994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388236] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:23.913 [2024-11-21 05:21:40.388274] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:23.913 [2024-11-21 05:21:40.388286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7e9664a-2cab-4c59-99ec-1785228b7f52 00:35:23.913 [2024-11-21 05:21:40.388294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:23.913 [2024-11-21 05:21:40.388302] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:35:23.913 [2024-11-21 05:21:40.388310] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:35:23.913 [2024-11-21 05:21:40.388319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:35:23.913 [2024-11-21 05:21:40.388333] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:23.913 [2024-11-21 05:21:40.388342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:23.913 [2024-11-21 05:21:40.388350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:23.913 [2024-11-21 05:21:40.388358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:23.913 [2024-11-21 05:21:40.388365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:23.913 [2024-11-21 05:21:40.388372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.913 [2024-11-21 05:21:40.388380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:23.913 [2024-11-21 05:21:40.388390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:35:23.913 [2024-11-21 05:21:40.388397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.391059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.913 [2024-11-21 05:21:40.391195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:23.913 [2024-11-21 05:21:40.391264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:35:23.913 [2024-11-21 05:21:40.391289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.391422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.913 [2024-11-21 05:21:40.391446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:23.913 [2024-11-21 05:21:40.391527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:35:23.913 [2024-11-21 05:21:40.391550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.400416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.400573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:23.913 [2024-11-21 05:21:40.400757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.400803] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.400889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.400912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:23.913 [2024-11-21 05:21:40.400932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.400953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.401049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.401085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:23.913 [2024-11-21 05:21:40.401113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.401178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.401213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.401267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:23.913 [2024-11-21 05:21:40.401304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.401459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.419387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.419556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:23.913 [2024-11-21 05:21:40.419677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.419703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.432877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.433048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:23.913 [2024-11-21 05:21:40.433104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.433118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.433178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.433196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:23.913 [2024-11-21 05:21:40.433206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.913 [2024-11-21 05:21:40.433219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.913 [2024-11-21 05:21:40.433256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.913 [2024-11-21 05:21:40.433266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:23.914 [2024-11-21 05:21:40.433286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.914 [2024-11-21 05:21:40.433296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.914 [2024-11-21 05:21:40.433357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.914 [2024-11-21 05:21:40.433377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:23.914 [2024-11-21 05:21:40.433387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:35:23.914 [2024-11-21 05:21:40.433395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.914 [2024-11-21 05:21:40.433429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.914 [2024-11-21 05:21:40.433439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:23.914 [2024-11-21 05:21:40.433447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.914 [2024-11-21 05:21:40.433456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.914 [2024-11-21 05:21:40.433502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.914 [2024-11-21 05:21:40.433512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:23.914 [2024-11-21 05:21:40.433521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.914 [2024-11-21 05:21:40.433532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.914 [2024-11-21 05:21:40.433588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:23.914 [2024-11-21 05:21:40.433598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:23.914 [2024-11-21 05:21:40.433629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:23.914 [2024-11-21 05:21:40.433643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.914 [2024-11-21 05:21:40.433801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.813 ms, result 0 00:35:24.175 00:35:24.175 00:35:24.175 05:21:40 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:26.725 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:26.725 05:21:42 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:26.725 05:21:42 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:26.725 05:21:42 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94867 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94867 ']' 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94867 00:35:26.725 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94867) - No such process 00:35:26.725 Process with pid 94867 is not found 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94867 is not found' 00:35:26.725 Remove shared memory files 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_band_md /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_l2p_l1 
/dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_l2p_l2 /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_l2p_l2_ctx /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_nvc_md /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_p2l_pool /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_sb /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_sb_shm /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_trim_bitmap /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_trim_log /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_trim_md /dev/hugepages/ftl_a7e9664a-2cab-4c59-99ec-1785228b7f52_vmap 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:26.725 ************************************ 00:35:26.725 END TEST ftl_restore_fast 00:35:26.725 ************************************ 00:35:26.725 00:35:26.725 real 4m32.554s 00:35:26.725 user 4m19.465s 00:35:26.725 sys 0m12.680s 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.725 05:21:43 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:26.725 Process with pid 86507 is not found 00:35:26.725 05:21:43 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:26.725 05:21:43 ftl -- ftl/ftl.sh@14 -- # killprocess 86507 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@954 -- # '[' -z 86507 ']' 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@958 -- # kill -0 86507 00:35:26.725 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86507) - No such process 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 86507 is not found' 00:35:26.725 05:21:43 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:26.725 05:21:43 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97620 00:35:26.725 05:21:43 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:26.725 05:21:43 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97620 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 97620 ']' 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.725 05:21:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:26.725 [2024-11-21 05:21:43.265919] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 23.11.0 initialization... 
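
The fast-shutdown dump above is the last FTL snapshot before teardown: the bands shown all report 0 / 261120 valid blocks in state free, and the stats block gives the write-amplification factor directly, WAF = total writes / user writes = 2592 / 2560 = 1.0125. The md5sum -c check that follows is the actual pass criterion for ftl_restore_fast. Below is a minimal sketch for pulling the same summary out of a saved console log; it assumes the usual one-record-per-line console format, and the log path is a placeholder, not part of the harness.

    #!/usr/bin/env bash
    # Sketch: summarize an FTL shutdown dump from a saved console log.
    # Assumes one record per line; the log path is hypothetical.
    log=${1:-console.log}
    awk '
        / Band [0-9]+: .* state: /  { states[$NF]++ }   # e.g. "state: free"
        / total writes: /           { total = $NF }     # 2592 in the run above
        / user writes: /            { user  = $NF }     # 2560 in the run above
        END {
            for (s in states) printf "%-6s bands: %d\n", s, states[s]
            if (user > 0) printf "WAF: %.4f\n", total / user   # 2592/2560 = 1.0125
        }
    ' "$log"
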
00:35:26.725 [2024-11-21 05:21:43.266211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97620 ] 00:35:26.725 [2024-11-21 05:21:43.420127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.986 [2024-11-21 05:21:43.459877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.558 05:21:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.558 05:21:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:27.558 05:21:44 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:27.820 nvme0n1 00:35:27.820 05:21:44 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:27.820 05:21:44 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:27.820 05:21:44 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:28.081 05:21:44 ftl -- ftl/common.sh@28 -- # stores=68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae 00:35:28.081 05:21:44 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:28.081 05:21:44 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68d3e78e-0ba6-4a68-bf30-8fbf4d4310ae 00:35:28.341 05:21:44 ftl -- ftl/ftl.sh@23 -- # killprocess 97620 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 97620 ']' 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@958 -- # kill -0 97620 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@959 -- # uname 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97620 00:35:28.341 killing process with pid 97620 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97620' 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@973 -- # kill 97620 00:35:28.341 05:21:44 ftl -- common/autotest_common.sh@978 -- # wait 97620 00:35:28.914 05:21:45 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:28.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:29.176 Waiting for block devices as requested 00:35:29.176 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:29.176 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:29.176 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:29.436 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:34.728 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:34.728 Remove shared memory files 00:35:34.728 05:21:51 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:35:34.728 05:21:51 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:34.728 05:21:51 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:34.728 05:21:51 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:34.728 05:21:51 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:34.728 05:21:51 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:34.728 05:21:51 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:34.728 
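
The target startup above is the standard launch-and-wait handshake: ftl.sh starts build/bin/spdk_tgt, records its pid (97620), and waitforlisten blocks until the app is serving RPCs on /var/tmp/spdk.sock. A hedged sketch of that handshake follows; rpc_get_methods is a stock SPDK RPC, but the poll interval and retry budget here are assumptions rather than the harness's actual values.

    #!/usr/bin/env bash
    # Sketch of the start-and-wait pattern behind "waitforlisten 97620" above:
    # launch spdk_tgt, then poll the default RPC socket until the target answers.
    # Retry budget and sleep interval are assumptions, not the harness's values.
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        if "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done
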
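The clear_lvols step in the same stretch is a two-RPC idiom worth calling out: bdev_lvol_get_lvstores lists every logical-volume store as JSON, jq extracts the UUIDs, and bdev_lvol_delete_lvstore removes each one so the base bdev (nvme0n1) comes up clean. The loop below restates it as a standalone sketch, with the rpc.py path taken from the log.

    #!/usr/bin/env bash
    # Sketch of the clear_lvols pattern shown above: list every lvstore UUID,
    # then delete each one so the base bdev is left clean for the next test.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the log
    for uuid in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$rpc" bdev_lvol_delete_lvstore -u "$uuid"
    done
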
************************************ 00:35:34.728 END TEST ftl 00:35:34.728 ************************************ 00:35:34.728 00:35:34.728 real 16m55.201s 00:35:34.728 user 18m56.854s 00:35:34.728 sys 1m24.886s 00:35:34.728 05:21:51 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.728 05:21:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:34.728 05:21:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:34.728 05:21:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:34.728 05:21:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:34.728 05:21:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:34.728 05:21:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:34.728 05:21:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:34.728 05:21:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:34.728 05:21:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:34.728 05:21:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:34.728 05:21:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:34.728 05:21:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.728 05:21:51 -- common/autotest_common.sh@10 -- # set +x 00:35:34.728 05:21:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:34.728 05:21:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:34.728 05:21:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:34.728 05:21:51 -- common/autotest_common.sh@10 -- # set +x 00:35:36.115 INFO: APP EXITING 00:35:36.115 INFO: killing all VMs 00:35:36.115 INFO: killing vhost app 00:35:36.115 INFO: EXIT DONE 00:35:36.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:36.950 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:36.950 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:36.950 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:35:36.950 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:35:37.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:37.557 Cleaning 00:35:37.557 Removing: /var/run/dpdk/spdk0/config 00:35:37.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:37.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:37.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:37.557 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:37.557 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:37.557 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:37.557 Removing: /var/run/dpdk/spdk0 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69347 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69516 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69718 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69805 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69828 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69940 00:35:37.557 Removing: /var/run/dpdk/spdk_pid69958 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70140 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70214 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70293 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70388 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70468 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70508 00:35:37.558 Removing: /var/run/dpdk/spdk_pid70539 00:35:37.818 Removing: /var/run/dpdk/spdk_pid70609 00:35:37.819 Removing: /var/run/dpdk/spdk_pid70704 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71124 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71177 
00:35:37.819 Removing: /var/run/dpdk/spdk_pid71218 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71234 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71292 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71308 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71377 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71382 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71435 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71442 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71490 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71502 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71640 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71671 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71749 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71916 00:35:37.819 Removing: /var/run/dpdk/spdk_pid71983 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72014 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72441 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72535 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72633 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72678 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72698 00:35:37.819 Removing: /var/run/dpdk/spdk_pid72782 00:35:37.819 Removing: /var/run/dpdk/spdk_pid73385 00:35:37.819 Removing: /var/run/dpdk/spdk_pid73416 00:35:37.819 Removing: /var/run/dpdk/spdk_pid73885 00:35:37.819 Removing: /var/run/dpdk/spdk_pid73978 00:35:37.819 Removing: /var/run/dpdk/spdk_pid74082 00:35:37.819 Removing: /var/run/dpdk/spdk_pid74124 00:35:37.819 Removing: /var/run/dpdk/spdk_pid74144 00:35:37.819 Removing: /var/run/dpdk/spdk_pid74169 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76006 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76127 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76131 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76143 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76187 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76197 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76209 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76248 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76252 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76264 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76309 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76313 00:35:37.819 Removing: /var/run/dpdk/spdk_pid76325 00:35:37.819 Removing: /var/run/dpdk/spdk_pid77710 00:35:37.819 Removing: /var/run/dpdk/spdk_pid77796 00:35:37.819 Removing: /var/run/dpdk/spdk_pid79186 00:35:37.819 Removing: /var/run/dpdk/spdk_pid80961 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81020 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81084 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81189 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81271 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81361 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81419 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81489 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81589 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81670 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81760 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81812 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81885 00:35:37.819 Removing: /var/run/dpdk/spdk_pid81984 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82065 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82155 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82208 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82277 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82371 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82457 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82543 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82606 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82669 00:35:37.819 Removing: 
/var/run/dpdk/spdk_pid82738 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82801 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82899 00:35:37.819 Removing: /var/run/dpdk/spdk_pid82981 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83065 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83124 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83187 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83255 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83322 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83421 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83501 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83639 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83906 00:35:37.819 Removing: /var/run/dpdk/spdk_pid83932 00:35:37.819 Removing: /var/run/dpdk/spdk_pid84379 00:35:38.080 Removing: /var/run/dpdk/spdk_pid84557 00:35:38.080 Removing: /var/run/dpdk/spdk_pid84646 00:35:38.080 Removing: /var/run/dpdk/spdk_pid84739 00:35:38.080 Removing: /var/run/dpdk/spdk_pid84782 00:35:38.080 Removing: /var/run/dpdk/spdk_pid84802 00:35:38.080 Removing: /var/run/dpdk/spdk_pid85100 00:35:38.080 Removing: /var/run/dpdk/spdk_pid85139 00:35:38.080 Removing: /var/run/dpdk/spdk_pid85195 00:35:38.080 Removing: /var/run/dpdk/spdk_pid85567 00:35:38.080 Removing: /var/run/dpdk/spdk_pid85705 00:35:38.080 Removing: /var/run/dpdk/spdk_pid86507 00:35:38.080 Removing: /var/run/dpdk/spdk_pid86623 00:35:38.080 Removing: /var/run/dpdk/spdk_pid86782 00:35:38.080 Removing: /var/run/dpdk/spdk_pid86867 00:35:38.080 Removing: /var/run/dpdk/spdk_pid87154 00:35:38.080 Removing: /var/run/dpdk/spdk_pid87396 00:35:38.080 Removing: /var/run/dpdk/spdk_pid87748 00:35:38.080 Removing: /var/run/dpdk/spdk_pid87903 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88076 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88113 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88316 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88331 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88373 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88598 00:35:38.080 Removing: /var/run/dpdk/spdk_pid88819 00:35:38.080 Removing: /var/run/dpdk/spdk_pid89281 00:35:38.080 Removing: /var/run/dpdk/spdk_pid90003 00:35:38.080 Removing: /var/run/dpdk/spdk_pid90749 00:35:38.080 Removing: /var/run/dpdk/spdk_pid91524 00:35:38.080 Removing: /var/run/dpdk/spdk_pid91676 00:35:38.080 Removing: /var/run/dpdk/spdk_pid91753 00:35:38.080 Removing: /var/run/dpdk/spdk_pid92282 00:35:38.080 Removing: /var/run/dpdk/spdk_pid92328 00:35:38.080 Removing: /var/run/dpdk/spdk_pid92750 00:35:38.080 Removing: /var/run/dpdk/spdk_pid93168 00:35:38.080 Removing: /var/run/dpdk/spdk_pid93902 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94025 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94056 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94113 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94159 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94212 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94432 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94512 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94575 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94636 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94666 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94722 00:35:38.080 Removing: /var/run/dpdk/spdk_pid94867 00:35:38.080 Removing: /var/run/dpdk/spdk_pid95072 00:35:38.080 Removing: /var/run/dpdk/spdk_pid95668 00:35:38.080 Removing: /var/run/dpdk/spdk_pid96384 00:35:38.080 Removing: /var/run/dpdk/spdk_pid96937 00:35:38.080 Removing: /var/run/dpdk/spdk_pid97620 00:35:38.080 Clean 00:35:38.080 05:21:54 -- common/autotest_common.sh@1453 -- # return 0 00:35:38.080 
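
The Cleaning block above removes per-process DPDK runtime state: the spdk0 config and fbarray files plus one /var/run/dpdk/spdk_pid<N> entry for every SPDK process the whole run launched. The kill -0 probes seen earlier ("No such process") are the liveness test behind killprocess. Below is a sketch of the same idea, removing only entries whose pid is gone; the loop itself is illustrative, not the harness's code, and it assumes root privileges the way the autotest run has them.

    #!/usr/bin/env bash
    # Sketch: drop DPDK runtime entries left behind by dead SPDK processes.
    # kill -0 delivers no signal; it only tests whether the pid still exists,
    # which is the same probe killprocess uses above. Assumes root, like the CI run.
    for entry in /var/run/dpdk/spdk_pid*; do
        [ -e "$entry" ] || continue
        pid=${entry##*/spdk_pid}
        if ! kill -0 "$pid" 2>/dev/null; then
            rm -rf "$entry"   # stale runtime state: config, fbarray_*, hugepage_info
        fi
    done
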
05:21:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:38.080 05:21:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:38.080 05:21:54 -- common/autotest_common.sh@10 -- # set +x 00:35:38.341 05:21:54 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:38.341 05:21:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:38.341 05:21:54 -- common/autotest_common.sh@10 -- # set +x 00:35:38.341 05:21:54 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:38.341 05:21:54 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:38.341 05:21:54 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:38.341 05:21:54 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:38.341 05:21:54 -- spdk/autotest.sh@398 -- # hostname 00:35:38.341 05:21:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:38.341 geninfo: WARNING: invalid characters removed from testname! 00:36:04.926 05:22:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:07.476 05:22:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:09.384 05:22:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:12.689 05:22:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:15.239 05:22:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:17.790 05:22:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:20.332 05:22:36 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:20.332 05:22:36 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:20.332 05:22:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:20.332 05:22:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:20.332 05:22:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:20.332 05:22:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:20.332 + [[ -n 5767 ]] 00:36:20.332 + sudo kill 5767 00:36:20.343 [Pipeline] } 00:36:20.359 [Pipeline] // timeout 00:36:20.364 [Pipeline] } 00:36:20.379 [Pipeline] // stage 00:36:20.384 [Pipeline] } 00:36:20.400 [Pipeline] // catchError 00:36:20.410 [Pipeline] stage 00:36:20.413 [Pipeline] { (Stop VM) 00:36:20.426 [Pipeline] sh 00:36:20.712 + vagrant halt 00:36:23.263 ==> default: Halting domain... 00:36:29.871 [Pipeline] sh 00:36:30.158 + vagrant destroy -f 00:36:32.698 ==> default: Removing domain... 00:36:32.970 [Pipeline] sh 00:36:33.256 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:33.291 [Pipeline] } 00:36:33.310 [Pipeline] // stage 00:36:33.315 [Pipeline] } 00:36:33.332 [Pipeline] // dir 00:36:33.337 [Pipeline] } 00:36:33.353 [Pipeline] // wrap 00:36:33.359 [Pipeline] } 00:36:33.374 [Pipeline] // catchError 00:36:33.384 [Pipeline] stage 00:36:33.387 [Pipeline] { (Epilogue) 00:36:33.402 [Pipeline] sh 00:36:33.723 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:39.011 [Pipeline] catchError 00:36:39.014 [Pipeline] { 00:36:39.028 [Pipeline] sh 00:36:39.316 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:39.316 Artifacts sizes are good 00:36:39.326 [Pipeline] } 00:36:39.339 [Pipeline] // catchError 00:36:39.349 [Pipeline] archiveArtifacts 00:36:39.356 Archiving artifacts 00:36:39.478 [Pipeline] cleanWs 00:36:39.491 [WS-CLEANUP] Deleting project workspace... 00:36:39.491 [WS-CLEANUP] Deferred wipeout is used... 00:36:39.499 [WS-CLEANUP] done 00:36:39.501 [Pipeline] } 00:36:39.518 [Pipeline] // stage 00:36:39.523 [Pipeline] } 00:36:39.538 [Pipeline] // node 00:36:39.545 [Pipeline] End of Pipeline 00:36:39.588 Finished: SUCCESS
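
The coverage post-processing near the end is a conventional lcov merge-and-filter pipeline: geninfo captures a baseline (cov_base.info) and a post-test trace (cov_test.info), lcov -a adds them into cov_total.info, and successive lcov -r passes strip vendored DPDK, system headers, and example/app sources before the result is published. A condensed sketch of that sequence follows; the paths and filter patterns come from the log, while the final genhtml step is an assumption, since the log stops at the filtered .info file.

    #!/usr/bin/env bash
    # Sketch of the lcov pipeline above: merge base + test traces, filter, render.
    set -euo pipefail
    rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    out=/home/vagrant/spdk_repo/output   # the log writes to spdk/../output

    # Merge the pre-test baseline with the post-test capture.
    lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip vendored DPDK and system headers, as the log does.
    lcov $rc -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov $rc -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' \
         -o "$out/cov_total.info"

    # HTML report: an assumed final step, not shown in the log itself.
    genhtml -q "$out/cov_total.info" -o "$out/coverage"
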
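Everything after coverage is pipeline teardown: kill the leftover helper (pid 5767), halt and destroy the vagrant guest, move output/ into the workspace, then compress, size-check, and archive artifacts before cleanWs wipes the workspace and the build finishes SUCCESS. The guest teardown reduces to two vagrant commands; the workspace path is taken from the log, and tolerating an already-stopped domain with || true is an assumption.

    #!/usr/bin/env bash
    # Sketch of the VM teardown in the [Pipeline] steps: stop, then delete the guest.
    cd /var/jenkins/workspace/nvme-vg-autotest   # workspace path from the log
    vagrant halt || true    # "Halting domain..."; || true assumed for idempotence
    vagrant destroy -f      # "Removing domain..."; -f skips the confirmation prompt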